High Performance LLM Inference Operator Library
☆788 · Feb 5, 2026 · Updated last month
Alternatives and similar repositories for hpc-ops
Users interested in hpc-ops are comparing it to the libraries listed below.
- A pared-down flash-attention implementation written with cutlass, intended as a teaching example ☆59 · Aug 12, 2024 · Updated last year
- Persistent dense GEMM for Hopper in `CuTeDSL` ☆15 · Aug 9, 2025 · Updated 7 months ago
- ☆13 · Nov 3, 2025 · Updated 4 months ago
- Distributed Compiler based on Triton for Parallel Systems ☆1,394 · Mar 11, 2026 · Updated last week
- Flash Attention in ~100 lines of CUDA (forward pass only; the online-softmax recurrence at its core is sketched after this list) ☆10 · Jun 10, 2024 · Updated last year
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models (see the scale-migration sketch after this list) ☆11 · Dec 13, 2023 · Updated 2 years ago
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens (see the INT4 dequantization sketch after this list) ☆1,041 · Sep 4, 2024 · Updated last year
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆5,403 · Updated this week
- ☆169 · Feb 5, 2026 · Updated last month
- FlashInfer: Kernel Library for LLM Serving ☆5,194 · Updated this week
- Examples of CUDA implementations using CUTLASS CuTe ☆270 · Jul 1, 2025 · Updated 8 months ago
- ☆261 · Jul 11, 2024 · Updated last year
- A high-performance kernel library for LLM training ☆67 · Feb 11, 2026 · Updated last month
- FP8 flash attention for the Ada architecture, implemented with the cutlass repository ☆81 · Aug 12, 2024 · Updated last year
- Flash Attention implemented with CuTe. ☆102 · Dec 17, 2024 · Updated last year
- A Quirky Assortment of CuTe Kernels ☆863 · Updated this week
- Multiple GEMM operators built with cutlass to support LLM inference. ☆19 · Aug 3, 2025 · Updated 7 months ago
- Accelerating MoE with IO and Tile-aware Optimizations ☆613 · Updated this week
- CUDA Templates and Python DSLs for High-Performance Linear Algebra ☆9,484 · Updated this week
- ☆38 · Aug 7, 2025 · Updated 7 months ago
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer ☆95 · Feb 20, 2026 · Updated last month
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism ☆2,572 · Updated this week
- A collection of specialized agent skills for AI infrastructure development, enabling Claude Code to write, optimize, and debug high-perfo… ☆94 · Feb 2, 2026 · Updated last month
- CUTLASS and CuTe Examples ☆132 · Nov 30, 2025 · Updated 3 months ago
- A flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆491 · Jan 20, 2026 · Updated 2 months ago
- Triton kernels for Flux ☆22 · Jul 7, 2025 · Updated 8 months ago
- How to optimize various algorithms in CUDA. ☆2,872 · Mar 17, 2026 · Updated last week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,273 · Aug 28, 2025 · Updated 6 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆44 · Feb 27, 2025 · Updated last year
- An experimental communicating attention kernel based on DeepEP. ☆35 · Jul 29, 2025 · Updated 7 months ago
- Triton for DSA ☆60 · Updated this week
- 📚LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners🐑, 200+ CUDA Kernels, Tensor Cores, HGEMM, FA-2 MMA.🎉 ☆9,932 · Updated this week
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,953 · Updated this week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆821 · Mar 6, 2025 · Updated last year
- ☆44 · Nov 1, 2025 · Updated 4 months ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆170 · Feb 11, 2026 · Updated last month
- A study of cutlass ☆22 · Nov 10, 2024 · Updated last year
- An easy-to-understand TensorOp Matmul Tutorial ☆409 · Mar 5, 2026 · Updated 2 weeks ago
- Fastest kernels written from scratch ☆561 · Sep 18, 2025 · Updated 6 months ago
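
Several of the flash-attention entries above (the ~100-line CUDA version and the multi-backend tutorial) revolve around one trick: streaming over K/V tiles while maintaining a running softmax, so the full N×N score matrix never materializes. Below is a minimal NumPy sketch of that online-softmax recurrence; the function name and the tile size of 64 are illustrative choices, not code from any listed repository.

```python
import numpy as np

def flash_attention_forward(Q, K, V, tile=64):
    """Single-head attention computed tile by tile with a running
    (online) softmax, so the full N x N score matrix never exists.
    Illustrative sketch; real kernels do this per thread block."""
    N, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros_like(Q)                      # running output accumulator
    m = np.full(N, -np.inf)                   # running row-wise max of scores
    l = np.zeros(N)                           # running softmax denominator
    for j in range(0, N, tile):               # stream over K/V tiles
        S = (Q @ K[j:j+tile].T) * scale       # scores for this tile: (N, tile)
        m_new = np.maximum(m, S.max(axis=1))  # updated row max
        correction = np.exp(m - m_new)        # rescale old stats to new max
        P = np.exp(S - m_new[:, None])        # unnormalized tile probabilities
        l = l * correction + P.sum(axis=1)
        O = O * correction[:, None] + P @ V[j:j+tile]
        m = m_new
    return O / l[:, None]                     # final normalization

# Check against the naive implementation.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((256, 32)) for _ in range(3))
S = (Q @ K.T) / np.sqrt(32)
ref = np.exp(S - S.max(axis=1, keepdims=True))
ref = (ref / ref.sum(axis=1, keepdims=True)) @ V
assert np.allclose(flash_attention_forward(Q, K, V), ref, atol=1e-6)
```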
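
The SmoothQuant entry rests on a simple identity: for positive per-channel scales s, X @ W equals (X / s) @ (s · W), so activation outliers can be migrated into the weights before quantization, with s_j = max|X[:, j]|^α / max|W[j, :]|^(1−α). A hedged NumPy sketch of that scale computation follows; the `smooth_scales` name and the α = 0.5 default are illustrative, not the paper's reference code.

```python
import numpy as np

def smooth_scales(X, W, alpha=0.5):
    """SmoothQuant-style per-input-channel smoothing scales:
    s_j = max|X[:, j]|^alpha / max|W[j, :]|^(1 - alpha).
    Dividing activations and multiplying weights by s keeps X @ W
    exact while flattening activation outliers. Sketch only."""
    act_max = np.abs(X).max(axis=0)           # per-channel activation range
    wgt_max = np.abs(W).max(axis=1)           # per-channel weight range
    s = act_max**alpha / wgt_max**(1 - alpha)
    return np.clip(s, 1e-5, None)             # guard against zero scales

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 16))
X[:, 3] *= 50.0                               # inject an outlier channel
W = rng.standard_normal((16, 4))

s = smooth_scales(X, W)
X_s, W_s = X / s, W * s[:, None]              # the "migrated" pair
assert np.allclose(X_s @ W_s, X @ W)          # mathematically equivalent
print(np.abs(X).max(), "->", np.abs(X_s).max())  # outlier magnitude shrinks
```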
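
The FP16xINT4 kernel entry depends on weights stored as 4-bit integers plus per-group half-precision scales, dequantized on the fly inside the GEMM as w = q · scale. The sketch below emulates that storage format and math in NumPy to show why the result stays close to the full-precision GEMM; the group size of 32 and the helper names are assumptions for illustration, not the kernel's actual layout.

```python
import numpy as np

GROUP = 32  # quantization group size (a common choice; assumed here)

def quantize_int4(W):
    """Symmetric 4-bit group quantization of a weight matrix: each group
    of GROUP consecutive input channels shares one scale. Sketch of the
    storage format such kernels consume."""
    K, N = W.shape
    Wg = W.reshape(K // GROUP, GROUP, N)
    scales = np.abs(Wg).max(axis=1) / 7.0                 # int4 range is [-8, 7]
    q = np.clip(np.rint(Wg / scales[:, None, :]), -8, 7).astype(np.int8)
    return q.reshape(K, N), scales.astype(np.float16)     # fp16 scales, as stored

def dequant_matmul(X, q, scales):
    """FP16 x INT4 GEMM emulated by dequantizing on the fly, the same
    math a fused kernel performs in registers: w = q * scale."""
    K, N = q.shape
    W = q.reshape(K // GROUP, GROUP, N) * scales[:, None, :].astype(np.float32)
    return X @ W.reshape(K, N)

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 64)).astype(np.float32)
X = rng.standard_normal((16, 128)).astype(np.float32)    # small batch, as in decoding
q, scales = quantize_int4(W)
err = np.abs(dequant_matmul(X, q, scales) - X @ W).max()
print(f"max abs error vs fp32 GEMM: {err:.3f}")          # small quantization error
```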