Distributed Compiler based on Triton for Parallel Systems
☆1,371 · Updated Feb 13, 2026
Alternatives and similar repositories for Triton-distributed
Users interested in Triton-distributed are comparing it to the libraries listed below.
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,261 · Updated Aug 28, 2025
- FlashInfer: Kernel Library for LLM Serving ☆5,057 · Updated this week
- Perplexity GPU Kernels ☆567 · Updated Nov 7, 2025
- A domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆5,284 · Updated this week
- A lightweight design for computation-communication overlap. ☆223 · Updated Jan 20, 2026
- Tile primitives for speedy kernels ☆3,202 · Updated Feb 24, 2026
- DeeperGEMM: crazy optimized version ☆74 · Updated May 5, 2025
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel ☆2,145 · Updated Feb 23, 2026
- A throughput-oriented high-performance serving framework for LLMs ☆947 · Updated Oct 29, 2025
- A Quirky Assortment of CuTe Kernels ☆838 · Updated this week
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆475 · Updated this week
- FlagGems is an operator library for large language models implemented in the Triton Language. ☆909 · Updated this week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆1,025 · Updated Sep 4, 2024
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆270 · Updated Feb 2, 2026
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆163 · Updated Feb 11, 2026
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆327 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,176 · Updated this week
- Ring attention implementation with flash attention ☆986 · Updated Sep 10, 2025
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆650 · Updated this week
- An Easy-to-understand TensorOp Matmul Tutorial ☆410 · Updated Feb 11, 2026
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆774 · Updated this week
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,843 · Updated this week
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆6,206 · Updated this week
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆751 · Updated Aug 6, 2025
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆644 · Updated Jan 15, 2026
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek V3/R1 training. ☆2,926 · Updated Jan 14, 2026
- CUDA Templates and Python DSLs for High-Performance Linear Algebra ☆9,348 · Updated this week
- kernels, of the mega variety ☆684 · Updated this week
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,474 · Updated this week
- Shared Middle-Layer for Triton Compilation ☆329 · Updated Dec 5, 2025
- DeepEP: an efficient expert-parallel communication library ☆9,005 · Updated Feb 9, 2026
- How to optimize some algorithms in CUDA. ☆2,825 · Updated Feb 15, 2026
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs ☆938 · Updated Nov 27, 2025
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆464 · Updated May 30, 2025
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆817 · Updated Mar 6, 2025
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆469 · Updated this week
- A PyTorch native platform for training generative AI models ☆5,098 · Updated this week