ai-dynamo / dynamo
A Datacenter Scale Distributed Inference Serving Framework
☆3,931 · Updated this week
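Dynamo, like most of the serving stacks in the list that follows (vLLM's production stack, SGLang, LMDeploy), is typically queried through an OpenAI-compatible HTTP endpoint. The sketch below shows what that looks like with the standard `openai` Python client; the base URL, port, API key, and model name are placeholder assumptions for a locally running deployment, not values taken from the project's docs.

```python
# Minimal sketch: querying an OpenAI-compatible serving endpoint.
# The base_url, api_key, and model name are placeholder assumptions
# for a locally running deployment; adjust them to your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local frontend address
    api_key="not-needed-for-local",       # many local deployments ignore the key
)

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct",     # placeholder model name
    messages=[{"role": "user", "content": "Summarize what disaggregated serving means."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```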
Alternatives and similar repositories for dynamo:
Users interested in dynamo are comparing it to the libraries listed below.
- FlashInfer: Kernel Library for LLM Serving ☆2,788 · Updated this week
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in V3/R1 training. ☆2,751 · Updated last month
- vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization ☆1,159 · Updated this week
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆3,184 · Updated this week
- Redis for LLMs ☆951 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,301 · Updated this week
- Expert Parallelism Load Balancer ☆1,161 · Updated last month
- Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation ☆7,747 · Updated 3 weeks ago
- Analyze computation-communication overlap in V3/R1. ☆1,012 · Updated last month
- Cost-efficient and pluggable Infrastructure components for GenAI inference ☆3,515 · Updated this week
- SGLang is a fast serving framework for large language models and vision language models. ☆13,976 · Updated this week
- A lightweight data processing framework built on DuckDB and 3FS. ☆4,601 · Updated 2 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆5,285 · Updated last week
- verl: Volcano Engine Reinforcement Learning for LLMs ☆7,626 · Updated this week
- DeepEP: an efficient expert-parallel communication library ☆7,531 · Updated last week
- A high-performance distributed file system designed to address the challenges of AI training and inference workloads. ☆8,828 · Updated 2 weeks ago
- PyTorch native quantization and sparsity for training and inference ☆2,015 · Updated this week
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆2,965 · Updated this week
- Democratizing Reinforcement Learning for LLMs ☆3,182 · Updated 3 weeks ago
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆1,768 · Updated last month
- Efficient Triton Kernels for LLM Training ☆4,960 · Updated this week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆912 · Updated 3 weeks ago
- Muon is Scalable for LLM Training ☆1,039 · Updated last month
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆1,089 · Updated this week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆6,303 · Updated this week
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆1,445 · Updated 2 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada, and Blackwell GPUs ☆2,393 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆1,836 · Updated this week
- 📚A curated list of Awesome LLM/VLM Inference Papers with code: WINT8/4, FlashAttention, PagedAttention, MLA, Parallelism, etc. ☆3,943 · Updated last week
- Sky-T1: Train your own O1 preview model within $450 ☆3,232 · Updated 2 weeks ago