A collection of specialized agent skills for AI infrastructure development, enabling Claude Code to write, optimize, and debug high-performance systems.
☆94 · Updated Feb 2, 2026
Alternatives and similar repositories for infra-skills
Users interested in infra-skills are comparing it to the repositories listed below.
- Persistent dense GEMM for Hopper in `CuTeDSL` (☆15, updated Aug 9, 2025)
- DeeperGEMM: an aggressively optimized version (☆75, updated May 5, 2025)
- Low-overhead tracing library and trace visualizer for pipelined CUDA kernels (☆133, updated Nov 26, 2025)
- NVIDIA cuTile learning resources (☆164, updated Dec 9, 2025)
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive (☆66, updated Dec 11, 2025)
- Learning TileLang with 10 puzzles! (☆157, updated Feb 25, 2026)
- Pipeline parallelism emulation and visualization (☆80, updated Jan 8, 2026)
- Tutorial exercises and code for the GPU Communications Tutorial at Hot Interconnects 2025 (☆31, updated Oct 22, 2025)
- Shared repository of the SAST Tutor academic-training group, Science and Technology Association, Department of Electronic Engineering, Tsinghua University (☆15, updated Apr 27, 2022)
- From Minimal GEMM to Everything (☆185, updated Feb 10, 2026)
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on TileLang (☆44, updated Nov 19, 2025)
- FlashTile is a CUDA Tile IR compiler compatible with NVIDIA's tileiras, targeting SM70 through SM121 NVIDIA GPUs (☆56, updated Feb 6, 2026)
- Fast and memory-efficient exact k-means (☆330, updated this week)
- NEO is an LLM inference engine that alleviates the GPU memory crunch via CPU offloading (☆90, updated Jun 16, 2025)
- RISC-V C and Triton AI benchmark (☆22, updated Jan 28, 2026)
- Boosting GPU utilization for LLM serving via dynamic spatial-temporal prefill & decode orchestration (☆37, updated Jan 8, 2026)
- An experimental communicating attention kernel based on DeepEP (☆35, updated Jul 29, 2025)
- Quantized Attention on GPU (☆44, updated Nov 22, 2024)
- Implementation of Flash Attention using CuTe (☆102, updated Dec 17, 2024)
- MSLK (Meta Superintelligence Labs Kernels) is a collection of PyTorch GPU operator libraries designed and optimized for GenAI tr… (☆71, updated this week)
- Unofficial description of the CUDA assembly (SASS) instruction sets (☆205, updated Jul 18, 2025)
- Tile-based language built for AI computation across all scales (☆138, updated Mar 12, 2026)
- 🤖FFPA: extends FlashAttention-2 with Split-D and ~O(1) SRAM complexity for large headdim; 1.8x–3x↑ vs SDPA EA (☆253, updated Feb 13, 2026)
- Standalone Flash Attention v2 kernel without a libtorch dependency (☆113, updated Sep 10, 2024)
- Official repo of CudaForge (☆70, updated Dec 2, 2025)
- A Survey of Efficient Attention Methods: Hardware-Efficient, Sparse, Compact, and Linear Attention (☆287, updated Dec 1, 2025)
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning (☆170, updated Nov 11, 2025)
- A Quirky Assortment of CuTe Kernels (☆861, updated this week)
- High-performance FP8 GEMM kernels for SM89 and later GPUs (☆20, updated Jan 24, 2025)
- A dynamic binary instrumentation tool for tracing and analyzing CUDA kernel instructions (☆35, updated Mar 12, 2026)