Chimera: bidirectional pipeline parallelism for efficiently training large-scale models.
☆70, last updated Mar 20, 2025
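For context on the headline technique: Chimera runs two pipelines through the same stages in opposite directions, which roughly halves pipeline bubbles relative to a single-direction 1F1B schedule. The sketch below is illustrative and not from the Chimera codebase; it assumes the commonly quoted per-iteration bubble counts of 2(D−1) for 1F1B and D−2 for Chimera, and counts N micro-batches as 2N busy slots per worker (one forward, one backward each). Treat the exact numbers as assumptions.

```python
# Illustrative sketch (not from the Chimera codebase): compare pipeline
# bubble ratios. Assumes 2(D-1) bubbles per iteration for 1F1B and D-2
# for Chimera's bidirectional schedule; exact counts are an assumption.
def bubble_ratio(num_bubbles: int, num_microbatches: int) -> float:
    """Fraction of an iteration a worker spends idle."""
    busy_slots = 2 * num_microbatches  # one forward + one backward slot per micro-batch
    return num_bubbles / (busy_slots + num_bubbles)

for depth in (4, 8, 16):
    n = depth                   # N = D micro-batches in flight, a common setting
    one_f1b = 2 * (depth - 1)   # bubbles of a single-direction 1F1B pipeline
    chimera = depth - 2         # the two opposite-direction pipelines fill each other's gaps
    print(f"D={depth}: 1F1B idle {bubble_ratio(one_f1b, n):.1%}, "
          f"Chimera idle {bubble_ratio(chimera, n):.1%}")
```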
Alternatives and similar repositories for Chimera
Users interested in Chimera are comparing it to the libraries listed below.
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. (☆55, updated Dec 11, 2022)
- An Efficient Pipelined Data Parallel Approach for Training Large Models. (☆75, updated Dec 11, 2020)
- Ok-Topk is a scheme for distributed training with sparse gradients; it integrates a novel sparse allreduce algorithm (less than 6k communication volume). A generic top-k sparsification sketch follows this list. (☆27, updated Dec 10, 2022)
- Zero Bubble Pipeline Parallelism. (☆451, updated May 7, 2025)
- RaNNC is an automatic parallelization middleware used to train very large-scale neural networks. (☆57, updated Oct 15, 2022)
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '23). (☆15, updated Sep 21, 2023)
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021). (☆56, updated Jul 21, 2021)
- Automated Parallelization System and Infrastructure for Multiple Ecosystems. (☆81, updated Nov 19, 2024)
- Accommodating Large Language Model Training over Heterogeneous Environments. (☆25, updated Mar 13, 2025)
- DELTA-pytorch: DELTA, Dynamically Optimizing GPU Memory beyond Tensor Recomputation. (☆12, updated Apr 16, 2024)
- Official implementation of the paper Lancet: Accelerating Mixture-of-Experts Training via Whole Graph Computation-Communication Overlapping. (☆14, updated Nov 17, 2025)
- Official repository for QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices (IPDPS '24). (☆20, updated Feb 23, 2024)
- nnScaler: Compiling DNN models for Parallel Training. (☆126, updated Sep 23, 2025)
- An experimental parallel training platform. (☆56, updated Mar 25, 2024)
- FTPipe and related pipeline model parallelism research. (☆44, updated May 16, 2023)
- Ongoing research training transformer models at scale. (☆18, updated this week)
- FlashSparse significantly reduces the computation redundancy for unstructured sparsity (for SpMM and SDDMM) on Tensor Cores through a Swap-and-Transpose matrix multiplication strategy. (☆39, updated Oct 5, 2025)
- ComScribe is a tool to identify communication among all GPU-GPU and CPU-GPU pairs in a single-node multi-GPU system. (☆27, updated Jul 6, 2023)
- A baseline repository of Auto-Parallelism in Training Neural Networks. (☆147, updated Jun 25, 2022)
- Synthesizer for optimal collective communication algorithms. (☆123, updated Apr 8, 2024)
- Pipeline Parallelism for PyTorch. (☆785, updated Aug 21, 2024)
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines. (☆19, updated Dec 8, 2023)
- A resilient distributed training framework. (☆97, updated Apr 11, 2024)
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches. (☆80, updated Jul 25, 2023)
- A Python library that transfers PyTorch tensors between CPU and NVMe. (☆125, updated Nov 27, 2024)
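As referenced in the Ok-Topk entry above, the sketch below shows generic top-k gradient sparsification with error feedback, the building block whose exchange sparse-allreduce schemes such as Ok-Topk make communication-efficient. It is a minimal illustration, not Ok-Topk's algorithm: the `topk_sparsify` helper and all names are hypothetical, and the sparse allreduce itself (Ok-Topk's contribution) is left as a comment.

```python
# Generic top-k gradient sparsification (illustrative; not Ok-Topk's algorithm).
# Each worker keeps only the k largest-magnitude gradient entries and
# accumulates everything else locally as residual error feedback.
import torch

def topk_sparsify(grad: torch.Tensor, residual: torch.Tensor, k: int):
    """Return (indices, values) of the k largest-magnitude entries; update residual."""
    corrected = grad + residual          # apply error feedback from prior steps
    flat = corrected.flatten()
    _, idx = torch.topk(flat.abs(), k)   # positions of the k largest magnitudes
    values = flat[idx]
    # Everything that was NOT sent becomes the residual for the next iteration.
    new_residual = corrected.clone()
    new_residual.view(-1)[idx] = 0.0
    residual.copy_(new_residual)
    return idx, values

# Usage sketch: sparsify, exchange (idx, values) via a sparse allreduce
# (the step Ok-Topk optimizes), then scatter back into a dense update.
g = torch.randn(1024)
r = torch.zeros_like(g)
idx, vals = topk_sparsify(g, r, k=32)
dense_update = torch.zeros_like(g)
dense_update.view(-1)[idx] = vals
```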