A collection of specialized agent skills for AI infrastructure development, enabling Claude Code to write, optimize, and debug high-performance systems.
☆121 · Apr 15, 2026 · Updated 2 weeks ago
Alternatives and similar repositories for infra-skills
Users interested in infra-skills are comparing it to the libraries listed below.
- Fast and memory-efficient exact attention · ☆21 · Apr 10, 2026 · Updated 2 weeks ago
- ☆98 · May 31, 2025 · Updated 11 months ago
- ☆36 · Mar 7, 2025 · Updated last year
- DeeperGEMM: crazy optimized version · ☆86 · May 5, 2025 · Updated 11 months ago
- HALO: Hadamard-Assisted Low-Precision Optimization and Training method for finetuning LLMs. 🚀 The official implementation of https://arx… · ☆28 · Feb 17, 2025 · Updated last year
- Low overhead tracing library and trace visualizer for pipelined CUDA kernels · ☆136 · Nov 26, 2025 · Updated 5 months ago
- An experimental communicating attention kernel based on DeepEP. · ☆34 · Jul 29, 2025 · Updated 9 months ago
- NVIDIA cuTile learn · ☆167 · Dec 9, 2025 · Updated 4 months ago
- ☆66 · Apr 26, 2025 · Updated last year
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive · ☆68 · Dec 11, 2025 · Updated 4 months ago
- ☆48 · Dec 13, 2025 · Updated 4 months ago
- ☆38 · Aug 7, 2025 · Updated 8 months ago
- Pipeline Parallelism Emulation and Visualization · ☆81 · Jan 8, 2026 · Updated 3 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning · ☆178 · Nov 11, 2025 · Updated 5 months ago
- Tutorial Exercises and Code for GPU Communications Tutorial at HOT Interconnects 2025 · ☆31 · Oct 22, 2025 · Updated 6 months ago
- Shared repository of the SAST Tutor program, Academic Training Department, Science and Technology Association, Department of Electronic Engineering, Tsinghua University · ☆15 · Apr 27, 2022 · Updated 4 years ago
- For audio visualization and playback in Jupyter notebooks. · ☆17 · Nov 25, 2025 · Updated 5 months ago
- Learning TileLang with 10 puzzles! · ☆235 · Updated this week
- ☆173 · Feb 5, 2026 · Updated 2 months ago
- From Minimal GEMM to Everything · ☆202 · Feb 10, 2026 · Updated 2 months ago
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on tilelang · ☆44 · Nov 19, 2025 · Updated 5 months ago
- ☆23 · Aug 20, 2025 · Updated 8 months ago
- FlashTile is a CUDA Tile IR compiler compatible with NVIDIA's tileiras, targeting SM70 through SM121 NVIDIA GPUs. · ☆60 · Feb 6, 2026 · Updated 2 months ago
- Fast and memory-efficient exact kmeans · ☆541 · Apr 17, 2026 · Updated last week
- ☆25 · Jun 19, 2025 · Updated 10 months ago
- NEO is an LLM inference engine built to ease the GPU memory crisis via CPU offloading · ☆94 · Jun 16, 2025 · Updated 10 months ago
- ☆27 · Nov 25, 2025 · Updated 5 months ago
- ☆65 · Feb 5, 2026 · Updated 2 months ago
- A Quirky Assortment of CuTe Kernels · ☆948 · Updated this week
- ☆44 · Oct 15, 2025 · Updated 6 months ago
- ☆119 · May 19, 2025 · Updated 11 months ago
- Boosting GPU utilization for LLM serving via dynamic spatial-temporal prefill & decode orchestration · ☆44 · Jan 8, 2026 · Updated 3 months ago
- RISCV C and Triton AI-Benchmark · ☆24 · Jan 28, 2026 · Updated 3 months ago
- Quantized Attention on GPU · ☆44 · Nov 22, 2024 · Updated last year
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer · ☆180 · Feb 11, 2026 · Updated 2 months ago
- Unofficial description of the CUDA assembly (SASS) instruction sets. · ☆211 · Jul 18, 2025 · Updated 9 months ago
- Implement Flash Attention using Cute. · ☆106 · Dec 17, 2024 · Updated last year
- FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA. · ☆276 · Updated this week
- Standalone Flash Attention v2 kernel without libtorch dependency · ☆113 · Sep 10, 2024 · Updated last year