☆71 · Updated 11 months ago
Alternatives and similar repositories for DeepSpeed-Kernels
Users interested in DeepSpeed-Kernels are comparing it to the libraries listed below.
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆95 · Updated last month
- Performance of the C++ interfaces of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆43 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,101 · Updated 8 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆106 · Updated 8 months ago
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline. ☆128 · Updated last year
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS '24). ☆20 · Updated 2 years ago
- ☆261 · Updated last year
- Persistent dense GEMM for Hopper in `CuTeDSL` ☆15 · Updated 7 months ago
- ☆25 · Updated 3 years ago
- ☆20 · Updated last year
- Perplexity GPU Kernels ☆566 · Updated 4 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference. ☆45 · Updated 9 months ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆481 · Updated this week
- ☆13 · Updated last year
- CUDA 12.2 HMM demos ☆20 · Updated last year
- A low-latency & high-throughput serving engine for LLMs ☆484 · Updated 2 months ago
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆1,041 · Updated last year
- IntLLaMA: A fast and lightweight quantization solution for LLaMA ☆18 · Updated 2 years ago
- ☆131 · Updated last year
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆21 · Updated this week
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆466 · Updated 9 months ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated 3 weeks ago
- Sample code using NVSHMEM on multiple GPUs ☆30 · Updated 3 years ago
- GPTQ inference Triton kernel ☆321 · Updated 2 years ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆330 · Updated this week
- ☆105 · Updated last year
- ☆25 · Updated 2 years ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆335 · Updated last year
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak performance (see the minimal WMMA sketch after this list). ☆149 · Updated 10 months ago
- ☆11 · Updated 4 years ago
- Extensible collectives library in Triton ☆97 · Updated 11 months ago
- Stateful LLM Serving ☆97 · Updated last year
- Expert Specialization MoE solution based on CUTLASS ☆27 · Updated 2 months ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆477 · Updated 2 years ago
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆113 · Updated last year
- ☆105 · Updated last week
- Microsoft Collective Communication Library ☆387 · Updated 2 years ago
- Torch Distributed Experimental ☆117 · Updated last year
- An easy-to-understand TensorOp Matmul tutorial ☆409 · Updated 2 weeks ago
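
Several entries above (the HGEMM write-up and the TensorOp Matmul tutorial) revolve around Tensor Core GEMM programming. For orientation, the sketch below shows the basic `nvcuda::wmma` pattern these tutorials build on: one warp accumulates one 16x16 FP32 output tile of an FP16 GEMM. This is a minimal illustrative sketch, not code from any listed repository; it assumes M, N, and K are multiples of 16, A stored row-major, and B stored column-major.

```cuda
// Minimal WMMA HGEMM sketch (illustrative only): C = A * B, one warp per 16x16 tile of C.
// Assumptions: M, N, K are multiples of 16; A is row-major MxK; B is column-major KxN.
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

__global__ void wmma_hgemm_16x16x16(const half* A, const half* B, float* C,
                                    int M, int N, int K) {
  // Map each warp to one 16x16 output tile.
  int warpM = (blockIdx.x * blockDim.x + threadIdx.x) / warpSize;
  int warpN = blockIdx.y * blockDim.y + threadIdx.y;
  int aRow = warpM * 16, bCol = warpN * 16;
  if (aRow >= M || bCol >= N) return;

  wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
  wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
  wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;
  wmma::fill_fragment(c_frag, 0.0f);

  // March along K in steps of 16, accumulating into the output fragment.
  for (int k = 0; k < K; k += 16) {
    wmma::load_matrix_sync(a_frag, A + aRow * K + k, K);  // row-major A, leading dim K
    wmma::load_matrix_sync(b_frag, B + bCol * K + k, K);  // col-major B, leading dim K
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);
  }
  // Write the 16x16 tile back to row-major C.
  wmma::store_matrix_sync(C + aRow * N + bCol, c_frag, N, wmma::mem_row_major);
}
```

Launched with one warp per output tile, this is the bare-bones pattern; the listed repositories go much further with shared-memory tiling, software pipelining, and the MMA/CuTe abstractions to approach peak throughput.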