deepseek-ai / DualPipe
A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek V3/R1 training.
☆2,905 · Updated last week
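For context, DualPipe's core idea is to schedule micro-batches from both ends of the pipeline so that communication can be hidden behind independent computation. The snippet below is a minimal, illustrative sketch of computation-communication overlap in general, using plain Python threads as stand-ins for GPU streams and collectives; it is not DualPipe's implementation, and every function name in it is hypothetical.

```python
# Toy illustration (not DualPipe's code): hide a communication step behind
# computation that does not depend on it, the general idea behind
# computation-communication overlap in pipeline-parallel training.
import time
from concurrent.futures import ThreadPoolExecutor

def fake_all_to_all(data):
    """Stand-in for a pipeline / expert-parallel communication call."""
    time.sleep(0.5)  # pretend the network transfer takes 0.5 s
    return data

def fake_compute(x):
    """Stand-in for a forward/backward chunk independent of the transfer."""
    time.sleep(0.5)  # pretend the GPU kernel takes 0.5 s
    return x * 2

with ThreadPoolExecutor(max_workers=1) as pool:
    start = time.time()
    comm = pool.submit(fake_all_to_all, "activations")  # launch the transfer
    result = fake_compute(21)                           # overlap independent compute
    received = comm.result()                            # wait only for what is left
    print(f"overlapped wall time: {time.time() - start:.2f}s")  # ~0.5 s, not 1.0 s
```

In a real pipeline schedule the same pattern is expressed with CUDA streams and asynchronous collectives rather than threads, but the accounting is identical: the transfer costs only the portion that cannot be covered by concurrent computation.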
Alternatives and similar repositories for DualPipe
Users who are interested in DualPipe are comparing it to the libraries listed below.
- Expert Parallelism Load Balancer ☆1,334 · Updated 9 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆6,098 · Updated this week
- Analyze computation-communication overlap in V3/R1. ☆1,136 · Updated 10 months ago
- DeepEP: an efficient expert-parallel communication library ☆8,898 · Updated 3 weeks ago
- Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation ☆7,957 · Updated 8 months ago
- FlashMLA: Efficient Multi-head Latent Attention Kernels ☆11,979 · Updated this week
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,600 · Updated this week
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆4,739 · Updated this week
- A high-performance distributed file system designed to address the challenges of AI training and inference workloads. ☆9,633 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆4,707 · Updated this week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,224 · Updated 4 months ago
- A lightweight data processing framework built on DuckDB and 3FS. ☆4,901 · Updated 10 months ago
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆2,032 · Updated 9 months ago
- slime is an LLM post-training framework for RL Scaling. ☆3,330 · Updated this week
- Muon is Scalable for LLM Training ☆1,407 · Updated 5 months ago
- Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible. ☆3,410 · Updated this week
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,878 · Updated 2 years ago
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, large-language-model & vision-language-model based on Linear Attention ☆3,298 · Updated 6 months ago
- High-performance inference framework for large language models, focusing on efficiency, flexibility, and availability. ☆1,381 · Updated this week
- A Datacenter Scale Distributed Inference Serving Framework ☆5,793 · Updated this week
- Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models ☆3,011 · Updated last week
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs ☆917 · Updated last month
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆956 · Updated 10 months ago
- Democratizing Reinforcement Learning for LLMs ☆4,995 · Updated this week
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel ☆2,084 · Updated last week
- My learning notes for ML SYS. ☆5,077 · Updated this week
- Distributed Compiler based on Triton for Parallel Systems ☆1,315 · Updated 3 weeks ago
- A compact implementation of SGLang, designed to demystify the complexities of modern LLM serving systems. ☆2,924 · Updated 2 weeks ago
- Official Repo for Open-Reasoner-Zero ☆2,085 · Updated 7 months ago
- Simple RL training for reasoning ☆3,826 · Updated 3 weeks ago