ByteDance-Seed / Triton-distributed
Distributed Compiler based on Triton for Parallel Systems
☆1,303 · Updated last week
Alternatives and similar repositories for Triton-distributed
Users interested in Triton-distributed are comparing it to the libraries listed below.
- FlagGems is an operator library for large language models implemented in the Triton Language. ☆819 · Updated this week
- Perplexity GPU Kernels ☆547 · Updated last month
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,217 · Updated 4 months ago
- A Quirky Assortment of CuTe Kernels ☆732 · Updated this week
- Puzzles for learning Triton, playable with minimal environment configuration! ☆583 · Updated last week
- Zero Bubble Pipeline Parallelism ☆444 · Updated 7 months ago
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs ☆915 · Updated last month
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆453 · Updated 7 months ago
- Disaggregated serving system for Large Language Models (LLMs). ☆758 · Updated 9 months ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆471 · Updated 7 months ago
- Materials for learning SGLang ☆709 · Updated 3 weeks ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆597 · Updated last week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆799 · Updated 10 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆929 · Updated 2 months ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆738 · Updated 5 months ago
- kernels, of the mega variety ☆637 · Updated 3 months ago
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel ☆2,032 · Updated 2 weeks ago
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instruct… ☆511 · Updated last year
- An easy-to-understand TensorOp Matmul tutorial ☆403 · Updated 2 months ago
- Accelerating MoE with IO- and tile-aware optimizations ☆500 · Updated last week
- Fastest kernels written from scratch ☆507 · Updated 3 months ago
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark + Toolkit with Torch -> CUDA (+ more DSLs) ☆732 · Updated this week
- A collection of memory-efficient attention operators implemented in the Triton language. ☆287 · Updated last year
- Analyze the inference of Large Language Models (LLMs): computation, storage, transmission, and hardware roofline mod… ☆600 · Updated last year
- Ring attention implementation with flash attention ☆957 · Updated 3 months ago
- NVIDIA Inference Xfer Library (NIXL) ☆788 · Updated this week
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆446 · Updated last week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆619 · Updated last week
- ☆337 · Updated this week
- Fast CUDA matrix multiplication from scratch ☆996 · Updated 4 months ago
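
Many of the projects above build on or interoperate with Triton. For orientation, here is a minimal sketch of a Triton element-wise kernel (vector addition); it is illustrative only, assumes `torch` and `triton` are installed with a CUDA GPU available, and is not code from any repository listed above.

```python
# Minimal Triton vector-add sketch (illustrative, not from any listed repo).
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                            # one program per block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)  # this block's indices
    mask = offsets < n_elements                            # guard the tail block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)            # enough blocks to cover n elements
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out


if __name__ == "__main__":
    a = torch.randn(4096, device="cuda")
    b = torch.randn(4096, device="cuda")
    assert torch.allclose(add(a, b), a + b)
```

Libraries such as FlagGems extend this single-kernel pattern to full operator libraries, while Triton-distributed adds compiler support for overlapping such kernels with inter-GPU communication.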