High-Performance LLM Inference Operator Library
☆814 · Feb 5, 2026 · Updated 2 months ago
Alternatives and similar repositories for hpc-ops
Users interested in hpc-ops are comparing it to the libraries listed below.
- A pared-down flash-attention implementation built with cutlass, intended for teaching ☆59 · Aug 12, 2024 · Updated last year
- Persistent dense GEMM for Hopper in `CuTeDSL` ☆15 · Aug 9, 2025 · Updated 8 months ago
- ☆14 · Nov 3, 2025 · Updated 5 months ago
- Distributed Compiler based on Triton for Parallel Systems ☆1,403 · Updated this week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆1,051 · Sep 4, 2024 · Updated last year
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆5,478 · Updated this week
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆12 · Jun 10, 2024 · Updated last year
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆11 · Dec 13, 2023 · Updated 2 years ago
- ☆171 · Feb 5, 2026 · Updated 2 months ago
- Examples of CUDA implementations with Cutlass CuTe ☆271 · Jul 1, 2025 · Updated 9 months ago
- FlashInfer: Kernel Library for LLM Serving ☆5,372 · Updated this week
- ☆261 · Jul 11, 2024 · Updated last year
- A study of cutlass ☆22 · Nov 10, 2024 · Updated last year
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer; a minimal sketch of this fp16-activation/quantized-weight pattern appears after this list ☆96 · Feb 20, 2026 · Updated last month
- A high-performance kernel library for LLM training ☆71 · Feb 11, 2026 · Updated 2 months ago
- FP8 flash attention for the Ada architecture, implemented with the cutlass repository ☆82 · Aug 12, 2024 · Updated last year
- Implements Flash Attention using CuTe. ☆105 · Dec 17, 2024 · Updated last year
- A Quirky Assortment of CuTe Kernels ☆898 · Apr 6, 2026 · Updated last week
- Multiple GEMM operators built with cutlass to support LLM inference. ☆20 · Aug 3, 2025 · Updated 8 months ago
- Accelerating MoE with IO and Tile-aware Optimizations ☆621 · Apr 1, 2026 · Updated last week
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism ☆2,590 · Updated this week
- CUDA Templates and Python DSLs for High-Performance Linear Algebra ☆9,536 · Apr 2, 2026 · Updated last week
- ☆38 · Aug 7, 2025 · Updated 8 months ago
- CUTLASS and CuTe Examples ☆134 · Nov 30, 2025 · Updated 4 months ago
- A fast communication-overlapping library for tensor/expert parallelism on GPUs ☆1,284 · Aug 28, 2025 · Updated 7 months ago
- How to optimize some algorithms in CUDA. ☆2,910 · Apr 1, 2026 · Updated last week
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS; the tiled online-softmax core these implementations share is sketched after this list ☆502 · Jan 20, 2026 · Updated 2 months ago
- Triton kernels for Flux ☆23 · Jul 7, 2025 · Updated 9 months ago
- ☆105 · Sep 9, 2024 · Updated last year
- An easy-to-understand TensorOp matmul tutorial ☆422 · Mar 5, 2026 · Updated last month
- Performance of the C++ interfaces of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆44 · Feb 27, 2025 · Updated last year
- 📚LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners🐑, 200+ CUDA Kernels, Tensor Cores, HGEMM, FA-2 MMA.🎉 ☆10,217 · Updated this week
- An experimental communicating attention kernel based on DeepEP. ☆35 · Jul 29, 2025 · Updated 8 months ago
- Triton for DSA ☆60 · Apr 2, 2026 · Updated last week
- A collection of specialized agent skills for AI infrastructure development, enabling Claude Code to write, optimize, and debug high-perfo… ☆106 · Feb 2, 2026 · Updated 2 months ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆822 · Mar 6, 2025 · Updated last year
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI ☆5,071 · Updated this week
- ☆44 · Nov 1, 2025 · Updated 5 months ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆174 · Feb 11, 2026 · Updated 2 months ago
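
Several of the repositories above (the cutlass, CUDA, Triton, and CuTe flash-attention entries) implement variants of the same tiled online-softmax recurrence. As a rough orientation only, here is a minimal NumPy sketch of that recurrence for a single head; the function name, tile size, and single-head (N, d) layout are illustrative assumptions, not the API of any listed library.

```python
import numpy as np

def flash_attention_forward(Q, K, V, tile=64):
    """Single-head attention forward pass via the tiled online-softmax
    recurrence. Q, K, V: (N, d) arrays; `tile` is an arbitrary choice."""
    N, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros_like(Q)                       # running (unnormalized) output
    m = np.full(N, -np.inf)                    # running row-wise max of logits
    l = np.zeros(N)                            # running softmax denominator
    for j in range(0, N, tile):                # stream over K/V tiles instead of
        Kj, Vj = K[j:j + tile], V[j:j + tile]  # materializing the full N x N matrix
        S = (Q @ Kj.T) * scale                 # logits for this tile: (N, tile)
        m_new = np.maximum(m, S.max(axis=1))
        alpha = np.exp(m - m_new)              # rescales previously accumulated sums
        P = np.exp(S - m_new[:, None])
        l = l * alpha + P.sum(axis=1)
        O = O * alpha[:, None] + P @ Vj
        m = m_new
    return O / l[:, None]

# Sanity check against the naive, fully materialized softmax attention.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((256, 32)) for _ in range(3))
S = (Q @ K.T) / np.sqrt(32)
P = np.exp(S - S.max(axis=1, keepdims=True))
ref = (P / P.sum(axis=1, keepdims=True)) @ V
assert np.allclose(flash_attention_forward(Q, K, V), ref)
```

The GPU kernels in the listed repositories fuse this loop into tensor-core GEMMs and keep the running statistics in registers; the arithmetic, however, is the same.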
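
Likewise, the weight-quantized GEMM entries (the FP16xINT4 kernel, the FasterTransformer-extracted GEMM, QServe) share the basic pattern of low-bit weight codes plus per-channel scales, dequantized inside the GEMM. The sketch below shows only that arithmetic; the symmetric per-output-channel INT4 scheme, names, and shapes are assumptions for illustration, not any listed kernel's actual layout, and real kernels fuse the dequantization into the matmul tiles rather than reconstructing the whole weight matrix.

```python
import numpy as np

def quantize_int4(W):
    """Symmetric per-output-channel INT4 quantization of a weight matrix
    W (out_features, in_features). Returns int8-stored codes in [-8, 7]
    plus a per-channel scale (zero-row guard omitted for brevity)."""
    scale = np.abs(W).max(axis=1, keepdims=True) / 7.0
    codes = np.clip(np.round(W / scale), -8, 7).astype(np.int8)
    return codes, scale

def w4a16_matmul(x, codes, scale):
    """Dequantize-then-multiply: activations stay in floating point,
    weights are expanded from INT4 codes. Shown unfused for clarity."""
    W_hat = codes.astype(x.dtype) * scale      # reconstructed weights
    return x @ W_hat.T

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 64)).astype(np.float32)
x = rng.standard_normal((4, 64)).astype(np.float32)
codes, scale = quantize_int4(W)
err = np.abs(w4a16_matmul(x, codes, scale) - x @ W.T).max()
print(f"max abs error vs. fp32 GEMM: {err:.4f}")   # small but nonzero
```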