Zero Bubble Pipeline Parallelism
☆451 · Updated May 7, 2025
Alternatives and similar repositories for zero-bubble-pipeline-parallelism
Users interested in zero-bubble-pipeline-parallelism are comparing it to the libraries listed below; a short illustrative sketch of the zero-bubble scheduling idea follows the list.
- Ring attention implementation with flash attention ☆987 · Updated Sep 10, 2025
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆644 · Updated Jan 15, 2026
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · Updated Aug 19, 2024
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,264 · Updated Aug 28, 2025
- Distributed Compiler based on Triton for Parallel Systems ☆1,371 · Updated Feb 13, 2026
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs ☆938 · Updated Nov 27, 2025
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,176 · Updated this week
- Vocabulary Parallelism ☆25 · Updated Mar 10, 2025
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆70 · Updated Mar 20, 2025
- Sequence-level 1F1B schedule for LLMs. ☆38 · Updated Aug 26, 2025
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek V3/R1 training. ☆2,926 · Updated Jan 14, 2026
- Pipeline Parallelism for PyTorch ☆786 · Updated Aug 21, 2024
- PyTorch bindings for CUTLASS grouped GEMM. ☆185 · Updated Feb 19, 2026
- LLM training technologies developed by kwai ☆70 · Updated Jan 21, 2026
- (no description) ☆78 · Updated May 4, 2021
- Pipeline Parallelism Emulation and Visualization ☆79 · Updated Jan 8, 2026
- FlashInfer: Kernel Library for LLM Serving ☆5,057 · Updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,230 · Updated Aug 14, 2025
- Microsoft Automatic Mixed Precision Library ☆636 · Updated Dec 1, 2025
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,863 · Updated this week
- Ongoing research training transformer models at scale ☆15,461 · Updated this week
- Best practice for training LLaMA models in Megatron-LM ☆663 · Updated Jan 2, 2024
- A lightweight design for computation-communication overlap. ☆223 · Updated Jan 20, 2026
- Training and serving large-scale neural networks with auto parallelization. ☆3,184 · Updated Dec 9, 2023
- Estimate MFU for DeepSeekV3 ☆26 · Updated Jan 5, 2025
- Analyze computation-communication overlap in V3/R1. ☆1,143 · Updated Mar 21, 2025
- An easy-to-understand TensorOp Matmul Tutorial ☆409 · Updated this week
- Perplexity GPU Kernels ☆567 · Updated Nov 7, 2025
- A PyTorch native platform for training generative AI models ☆5,098 · Updated this week
- The official repo of Pai-Megatron-Patch for LLM & VLM large scale training developed by Alibaba Cloud. ☆1,534 · Updated Dec 15, 2025
- Triton-based implementation of Sparse Mixture of Experts. ☆268 · Updated Oct 3, 2025
- Fast low-bit matmul kernels in Triton ☆436 · Updated Feb 1, 2026
- Large Context Attention ☆769 · Updated Oct 13, 2025
- Ongoing research training transformer models at scale ☆18 · Updated this week
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆333 · Updated Dec 13, 2025
- InternEvo is an open-source lightweight training framework that aims to support model pre-training without the need for extensive dependencies… ☆418 · Updated Aug 21, 2025
- A library to analyze PyTorch traces. ☆472 · Updated Feb 4, 2026
- A throughput-oriented high-performance serving framework for LLMs ☆947 · Updated Oct 29, 2025
- nnScaler: Compiling DNN models for Parallel Training ☆124 · Updated Sep 23, 2025
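For orientation, here is a minimal illustrative sketch (not taken from any repository above) of the quantity zero-bubble schedulers target: the idle "bubble" of a classic 1F1B pipeline schedule. It assumes unit-cost chunks, with each microbatch costing one time unit of forward (F) and two units of backward, which zero-bubble schedules split into an activation-gradient phase (B) and a weight-gradient phase (W) so the W work can be reordered to fill the idle slots.

```python
# Illustrative sketch only: estimate the idle fraction of a classic 1F1B
# pipeline schedule under unit costs (F = 1 unit, B + W = 2 units per
# microbatch per stage, communication ignored). Zero-bubble schedules
# reorder the weight-gradient (W) work to fill exactly this idle time.

def bubble_fraction_1f1b(stages: int, microbatches: int) -> float:
    """Idle fraction of total runtime under 1F1B with the unit-cost
    assumptions above; algebraically (p - 1) / (m + p - 1)."""
    work = 3 * microbatches      # useful compute per device
    bubble = 3 * (stages - 1)    # pipeline fill/drain idle time
    return bubble / (work + bubble)

if __name__ == "__main__":
    for p, m in [(4, 4), (4, 16), (8, 32)]:
        print(f"stages={p:2d} microbatches={m:3d} "
              f"bubble ~ {bubble_fraction_1f1b(p, m):.1%}")
```

For example, m = 16 microbatches on p = 4 stages gives 3/19, roughly a 16% bubble, which is the overhead that zero-bubble scheduling aims to eliminate.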