deepseek-ai / DualPipe
A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek-V3/R1 training.
☆2,856 · Updated 6 months ago
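DualPipe's core trick is to keep communication in flight while computation runs, scheduling micro-batches from both ends of the pipeline so neither ever waits on the other. Below is a minimal single-GPU sketch of that overlap mechanism using two CUDA streams in PyTorch; it is not DualPipe's API or schedule, and the host-to-device copy merely stands in for the inter-rank communication the real algorithm overlaps. All sizes and names are made up.

```python
import torch

# A minimal sketch of the mechanism DualPipe builds on: while the default
# CUDA stream computes on micro-batch i, a second stream moves the data for
# micro-batch i + 1. DualPipe applies this across pipeline ranks, overlapping
# P2P/all-to-all communication with forward and backward compute; the plain
# H2D copy here is only an illustrative stand-in for that communication.

assert torch.cuda.is_available(), "this sketch assumes a CUDA device"
comm = torch.cuda.Stream()  # stand-in for a dedicated communication stream
w = torch.randn(4096, 4096, device="cuda")
host_chunks = [torch.randn(4096, 4096).pin_memory() for _ in range(4)]
dev_chunks = [None] * len(host_chunks)

# Prefetch micro-batch 0 on the communication stream.
with torch.cuda.stream(comm):
    dev_chunks[0] = host_chunks[0].to("cuda", non_blocking=True)

for i in range(len(host_chunks)):
    # Compute may only start once chunk i has arrived.
    torch.cuda.current_stream().wait_stream(comm)
    # Kick off the transfer of chunk i + 1; it overlaps the matmul below,
    # because the wait above only covers work already enqueued on `comm`.
    if i + 1 < len(host_chunks):
        with torch.cuda.stream(comm):
            dev_chunks[i + 1] = host_chunks[i + 1].to("cuda", non_blocking=True)
    y = dev_chunks[i] @ w  # "computation" for micro-batch i
torch.cuda.synchronize()
```

Under a profiler such as Nsight Systems, the copies and matmuls land on separate streams and run concurrently; DualPipe's contribution is choosing a bidirectional micro-batch schedule in which there is always such a pair to overlap.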
Alternatives and similar repositories for DualPipe
Users interested in DualPipe are comparing it to the libraries listed below.
- Expert Parallelism Load Balancer ☆1,263 · Updated 5 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling (the scaling bookkeeping is sketched after this list) ☆5,695 · Updated last week
- Analyze computation-communication overlap in DeepSeek-V3/R1. ☆1,097 · Updated 5 months ago
- DeepEP: an efficient expert-parallel communication library ☆8,496 · Updated this week
- Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation ☆7,909 · Updated 4 months ago
- FlashMLA: Efficient MLA kernels ☆11,722 · Updated 2 weeks ago
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆3,912 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆3,723 · Updated this week
- A high-performance distributed file system designed to address the challenges of AI training and inference workloads. ☆9,286 · Updated this week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,106 · Updated 2 weeks ago
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆1,899 · Updated 5 months ago
- Muon is Scalable for LLM Training ☆1,302 · Updated last month
- A lightweight data processing framework built on DuckDB and 3FS. ☆4,772 · Updated 6 months ago
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,787 · Updated last year
- High-performance inference framework for large language models, focusing on efficiency, flexibility, and availability. ☆1,261 · Updated this week
- A Datacenter Scale Distributed Inference Serving Framework ☆4,940 · Updated this week
- A domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆1,608 · Updated this week
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and a vision-language model based on linear attention ☆3,142 · Updated 2 months ago
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel ☆1,773 · Updated this week
- slime is an LLM post-training framework for RL scaling. ☆1,747 · Updated this week
- Distributed Compiler based on Triton for Parallel Systems ☆1,107 · Updated this week
- Distributed RL System for LLM Reasoning ☆2,569 · Updated this week
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆847 · Updated 5 months ago
- Community-maintained hardware plugin for vLLM on Ascend ☆1,092 · Updated this week
- A PyTorch Native LLM Training Framework ☆863 · Updated 2 months ago
- Fast, Flexible and Portable Structured Generation ☆1,233 · Updated this week
- The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud. ☆1,341 · Updated this week
- Nano vLLM ☆6,553 · Updated 2 weeks ago
- Expert Specialized Fine-Tuning ☆696 · Updated 3 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,919 · Updated last week
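On the DeepGEMM entry above: the accuracy of FP8 GEMM hinges on fine-grained scaling, where (as in the DeepSeek-V3 report) activations carry one scale per 1×128 group along the reduction dimension and weights one scale per 128×128 block, applied while accumulating. The NumPy sketch below illustrates only that bookkeeping and why it tames outliers; it uses int8 as a stand-in for FP8 (e4m3), and every size, name, and value in it is illustrative, not DeepGEMM's implementation.

```python
import numpy as np

# Fine-grained scaling for a quantized GEMM: A gets one scale per [1 x G]
# group along k, B one scale per [G x G] block, and scales are applied to
# each exact integer partial product, one k-group at a time.
# int8 stands in for FP8 (e4m3); G and the shapes are made up.

G = 128
m, k, n = 256, 512, 384
rng = np.random.default_rng(0)
a = rng.standard_normal((m, k)).astype(np.float32)
b = rng.standard_normal((k, n)).astype(np.float32)
a[:, :4] *= 100.0  # a few outlier channels, where per-tensor scaling suffers

Gk, Gn = k // G, n // G
# Per-group scales for A, shape (m, Gk); per-block scales for B, shape (Gk, Gn).
sa = np.abs(a).reshape(m, Gk, G).max(axis=-1) / 127.0
qa = np.round(a.reshape(m, Gk, G) / sa[..., None]).astype(np.int8)
sb = np.abs(b).reshape(Gk, G, Gn, G).max(axis=(1, 3)) / 127.0
qb = np.round(b.reshape(Gk, G, Gn, G) / sb[:, None, :, None]).astype(np.int8)

# Accumulate one k-group at a time, rescaling each int32 partial product.
c = np.zeros((m, n), dtype=np.float32)
for g in range(Gk):
    partial = qa[:, g, :].astype(np.int32) @ qb[g].reshape(G, n).astype(np.int32)
    c += partial.astype(np.float32) * sa[:, g, None] * np.repeat(sb[g], G)[None, :]

# Baseline: one scale per tensor.
sA, sB = np.abs(a).max() / 127.0, np.abs(b).max() / 127.0
qA, qB = np.round(a / sA).astype(np.int32), np.round(b / sB).astype(np.int32)
ct = (qA @ qB).astype(np.float32) * sA * sB

ref = a @ b
err = lambda x: np.abs(x - ref).mean() / np.abs(ref).mean()
print(f"per-tensor scaling error  : {err(ct):.4f}")
print(f"fine-grained scaling error: {err(c):.4f}")
```

Running it shows the per-tensor variant losing most of its precision to the outlier channels, while the fine-grained variant confines the damage to the few groups that actually contain them.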