deepseek-ai / EPLB
Expert Parallelism Load Balancer
☆1,322 · Updated 9 months ago
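For context on what the repository does: an expert-parallelism load balancer decides how many replicas each MoE expert gets and which GPU each replica is placed on, so that token traffic is spread evenly across devices. Below is a minimal sketch of that idea in Python, assuming a simple greedy heuristic; the function name and structure are hypothetical illustrations, not EPLB's actual algorithm or API.

```python
# Sketch: balance MoE expert load across GPUs by (1) replicating the
# hottest experts and (2) greedily packing replicas onto the least-loaded
# GPU. Hypothetical illustration only, not EPLB's implementation.

def rebalance(expert_load: list[float], num_replicas: int, num_gpus: int):
    """Return a gpu -> [expert id] placement.

    expert_load:  estimated load per logical expert (e.g. token counts)
    num_replicas: total physical expert slots (>= number of experts)
    num_gpus:     slots are split evenly across GPUs
    """
    num_experts = len(expert_load)
    assert num_replicas >= num_experts and num_replicas % num_gpus == 0

    # Replication: every expert gets one replica; each remaining slot goes
    # to the expert with the highest per-replica load.
    replicas = [1] * num_experts
    for _ in range(num_replicas - num_experts):
        hottest = max(range(num_experts),
                      key=lambda e: expert_load[e] / replicas[e])
        replicas[hottest] += 1

    # Packing: sort physical replicas by per-replica load, heaviest first,
    # and place each on the least-loaded GPU that still has a free slot.
    phys = [(expert_load[e] / replicas[e], e)
            for e in range(num_experts) for _ in range(replicas[e])]
    phys.sort(reverse=True)
    slots_per_gpu = num_replicas // num_gpus
    placement = [[] for _ in range(num_gpus)]
    gpu_load = [0.0] * num_gpus
    for load, e in phys:
        g = min((g for g in range(num_gpus)
                 if len(placement[g]) < slots_per_gpu),
                key=lambda g: gpu_load[g])
        placement[g].append(e)
        gpu_load[g] += load
    return placement

# Example: 4 experts with skewed load, 8 physical slots on 2 GPUs.
# Expert 0 is replicated 4x and the two GPUs end up evenly loaded.
print(rebalance([90.0, 30.0, 20.0, 10.0], num_replicas=8, num_gpus=2))
```

The real balancer also has to respect node topology and expert-group routing constraints, which this sketch deliberately ignores.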
Alternatives and similar repositories for EPLB
Users interested in EPLB compare it to the libraries listed below.
- Analyze computation-communication overlap in V3/R1 (see the overlap sketch after this list). ☆1,128 · Updated 9 months ago
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek V3/R1 training. ☆2,892 · Updated 9 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆5,989 · Updated this week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,208 · Updated 3 months ago
- DeepEP: an efficient expert-parallel communication library ☆8,826 · Updated this week
- ByteDance's PyTorch Distributed for hyperscale training of LLMs and RL ☆910 · Updated 3 weeks ago
- Distributed compiler based on Triton for parallel systems ☆1,280 · Updated last week
- FlashInfer: Kernel Library for LLM Serving ☆4,285 · Updated last week
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,471 · Updated this week
- A domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆4,289 · Updated this week
- Materials for learning SGLang ☆693 · Updated last week
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆2,022 · Updated 8 months ago
- slime is an LLM post-training framework for RL scaling. ☆2,911 · Updated this week
- Muon is Scalable for LLM Training ☆1,387 · Updated 4 months ago
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel ☆2,004 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs). ☆754 · Updated 8 months ago
- A throughput-oriented, high-performance serving framework for LLMs ☆924 · Updated last month
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of attention… ☆1,166 · Updated 2 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆944 · Updated 9 months ago
- High-performance inference framework for large language models, focusing on efficiency, flexibility, and availability. ☆1,357 · Updated this week
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,856 · Updated last year
- NVIDIA Inference Xfer Library (NIXL) ☆778 · Updated this week
- Perplexity GPU Kernels ☆542 · Updated last month
- Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation ☆7,947 · Updated 7 months ago
- FlagGems is an operator library for large language models implemented in the Triton language. ☆803 · Updated this week
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆557 · Updated last week
- Ring attention implementation with Flash Attention ☆949 · Updated 3 months ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆949 · Updated this week
- FlagScale is a large-model toolkit based on open-source projects. ☆426 · Updated this week
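Several entries above, flagged in the first item, revolve around the same theme: hiding expert-parallel communication behind computation. Below is a minimal PyTorch sketch of that pattern, assuming an initialized NCCL process group; `expert_fn` and the buffer arguments are hypothetical stand-ins, and the listed libraries use custom kernels and much finer-grained scheduling.

```python
# Sketch: overlap the expert-parallel all-to-all with local computation.
# Assumes torch.distributed is initialized with the NCCL backend, under
# which async collectives run on a dedicated communication stream.
import torch
import torch.distributed as dist

def overlapped_moe_step(local_tokens, expert_fn, send_buf, recv_buf):
    # Kick off the all-to-all asynchronously...
    work = dist.all_to_all_single(recv_buf, send_buf, async_op=True)
    # ...and overlap it with expert computation on tokens that stay local.
    local_out = expert_fn(local_tokens)
    work.wait()                       # ensure remote tokens have arrived
    remote_out = expert_fn(recv_buf)  # then process the received tokens
    return local_out, remote_out
```

Production systems go further, chunking tokens so that receiving, computing, and sending interleave continuously rather than in a single overlap step.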