deepseek-ai / EPLB
Expert Parallelism Load Balancer
☆1,279 · Updated 7 months ago
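For context, expert-parallelism load balancing assigns experts of a Mixture-of-Experts model to GPUs so that per-GPU load stays as even as possible. Below is a minimal illustrative sketch of a greedy longest-processing-time placement, assuming per-expert load statistics are available; this is not EPLB's actual algorithm (which also replicates heavily loaded experts and balances hierarchically), and all names in the sketch are hypothetical.

```python
import heapq

def greedy_balance(expert_loads: list[float], num_gpus: int) -> list[list[int]]:
    """Greedily assign experts to GPUs: heaviest expert first,
    always onto the currently least-loaded GPU (LPT heuristic).
    Returns, per GPU, the list of expert indices placed on it."""
    # Min-heap of (accumulated_load, gpu_index) so the lightest GPU pops first.
    heap = [(0.0, g) for g in range(num_gpus)]
    heapq.heapify(heap)
    placement: list[list[int]] = [[] for _ in range(num_gpus)]
    # Sort experts by descending load so the big items are packed first.
    for expert in sorted(range(len(expert_loads)),
                         key=lambda e: expert_loads[e], reverse=True):
        load, gpu = heapq.heappop(heap)
        placement[gpu].append(expert)
        heapq.heappush(heap, (load + expert_loads[expert], gpu))
    return placement

# Example: 8 experts with skewed loads spread across 2 GPUs.
print(greedy_balance([9, 7, 6, 5, 4, 3, 2, 1], num_gpus=2))
```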
Alternatives and similar repositories for EPLB
Users interested in EPLB are comparing it to the libraries listed below.
- Analyze computation-communication overlap in DeepSeek V3/R1. ☆1,105 · Updated 7 months ago
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek V3/R1 training. ☆2,869 · Updated 7 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆5,812 · Updated last week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,145 · Updated last month
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,124 · Updated this week
- DeepEP: an efficient expert-parallel communication library ☆8,630 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆3,952 · Updated this week
- Distributed Compiler based on Triton for Parallel Systems ☆1,186 · Updated last week
- A PyTorch Native LLM Training Framework ☆875 · Updated last month
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆3,658 · Updated this week
- slime is an LLM post-training framework for RL Scaling. ☆2,232 · Updated this week
- Materials for learning SGLang ☆615 · Updated 3 weeks ago
- Muon is Scalable for LLM Training ☆1,336 · Updated 2 months ago
- Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation ☆7,923 · Updated 5 months ago
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,808 · Updated last year
- A throughput-oriented high-performance serving framework for LLMs ☆904 · Updated last month
- Disaggregated serving system for Large Language Models (LLMs). ☆706 · Updated 6 months ago
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆1,941 · Updated 6 months ago
- High-performance inference framework for large language models, focusing on efficiency, flexibility, and availability. ☆1,307 · Updated this week
- Ring attention implementation with flash attention ☆901 · Updated last month
- Perplexity GPU Kernels ☆497 · Updated last month
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆439 · Updated this week
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel ☆1,891 · Updated this week
- NVIDIA Inference Xfer Library (NIXL) ☆673 · Updated this week
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆903 · Updated 7 months ago
- Efficient and easy multi-instance LLM serving ☆497 · Updated last month
- Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible. ☆2,853 · Updated this week
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆892 · Updated this week
- FlashMLA: Efficient Multi-head Latent Attention Kernels ☆11,819 · Updated 3 weeks ago