Chimera: bidirectional pipeline parallelism for efficiently training large-scale models.
☆70 · Updated Mar 20, 2025
Alternatives and similar repositories for Chimera
Users interested in Chimera are comparing it to the libraries listed below.
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. — ☆55 · Updated Dec 11, 2022
- ☆82 · Updated Feb 11, 2026
- An Efficient Pipelined Data-Parallel Approach for Training Large Models — ☆75 · Updated Dec 11, 2020
- Zero Bubble Pipeline Parallelism — ☆451 · Updated May 7, 2025
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… — ☆27 · Updated Dec 10, 2022
- ☆10 · Updated Apr 29, 2023
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… — ☆15 · Updated Sep 21, 2023
- Automated Parallelization System and Infrastructure for Multiple Ecosystems — ☆82 · Updated Nov 19, 2024
- ☆22 · Updated Apr 22, 2024
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS '24) — ☆20 · Updated Feb 23, 2024
- ☆17 · Updated Dec 9, 2022
- An experimental parallel training platform — ☆56 · Updated Mar 25, 2024
- ☆78 · Updated May 4, 2021
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021) — ☆56 · Updated Jul 21, 2021
- RaNNC is an automatic parallelization middleware used to train very large-scale neural networks. — ☆57 · Updated Oct 15, 2022
- ☆392 · Updated Nov 4, 2022
- FTPipe and related pipeline model parallelism research. — ☆44 · Updated May 16, 2023
- DELTA-pytorch: Dynamically Optimizing GPU Memory beyond Tensor Recomputation — ☆12 · Updated Apr 16, 2024
- ☆48 · Updated Aug 6, 2024
- nnScaler: Compiling DNN models for Parallel Training — ☆124 · Updated Sep 23, 2025
- ☆10 · Updated Jun 28, 2025
- ☆47 · Updated Dec 13, 2024
- A tool for cross-checking Verilog compilers — ☆14 · Updated Apr 16, 2025
- Accommodating Large Language Model Training over Heterogeneous Environments — ☆25 · Updated Mar 13, 2025
- ☆26 · Updated Aug 31, 2023
- ☆44 · Updated Sep 6, 2021
- The (open-source part of) code to reproduce "BPPSA: Scaling Back-propagation by Parallel Scan Algorithm". — ☆13 · Updated Jun 7, 2021
- Deft: A Scalable Tree Index for Disaggregated Memory — ☆23 · Updated Apr 23, 2025
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches — ☆80 · Updated Jul 25, 2023
- ☆28 · Updated Jul 11, 2021
- ☆251 · Updated Jul 25, 2024
- Synthesizer for optimal collective communication algorithms — ☆124 · Updated Apr 8, 2024
- A computation-parallel deep learning architecture. — ☆13 · Updated Sep 25, 2019
- A baseline repository of Auto-Parallelism in Training Neural Networks — ☆147 · Updated Jun 25, 2022
- Take your first step in writing a compiler. Implemented in Rust. — ☆16 · Updated Apr 17, 2023
- Repository for the COLM 2025 paper "SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths" — ☆15 · Updated Jul 10, 2025
- Official implementation for the paper "Lancet: Accelerating Mixture-of-Experts Training via Whole Graph Computation-Communication Overlapp…" — ☆14 · Updated Nov 17, 2025
- FlashSparse significantly reduces the computation redundancy for unstructured sparsity (for SpMM and SDDMM) on Tensor Cores through a Swa… — ☆39 · Updated Oct 5, 2025
- Ongoing research training transformer models at scale — ☆18 · Updated this week
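The common thread among most of these projects is pipeline parallelism and its scheduling bubbles. As background for comparing them, here is a minimal illustrative sketch (not taken from Chimera or any listed repository; all names are hypothetical) of the idle-time "bubble" in a basic GPipe-style forward pipeline, which schemes like Chimera's bidirectional schedule and Zero Bubble Pipeline Parallelism aim to shrink:

```python
# Hypothetical sketch: bubble fraction of a GPipe-style forward pipeline
# with S stages and M micro-batches, assuming each stage takes exactly
# one time step per micro-batch (a simplification for illustration).

def gpipe_schedule(stages: int, micro_batches: int):
    """Return (total_steps, bubble_fraction) for a forward-only pipeline."""
    # Stage s processes micro-batch m at time step s + m (0-indexed),
    # so the final micro-batch leaves the last stage at step S + M - 1.
    total_steps = stages + micro_batches - 1
    # Each stage is busy for exactly M of those steps; the remaining
    # (S - 1) steps per stage are pipeline fill/drain bubble.
    bubble_fraction = (stages - 1) / total_steps
    return total_steps, bubble_fraction

if __name__ == "__main__":
    steps, bubble = gpipe_schedule(stages=4, micro_batches=8)
    print(steps, round(bubble, 3))  # 11 steps, bubble ≈ 0.273
```

Under this model the bubble shrinks as micro-batches grow relative to stages, which is why the listed systems either raise the micro-batch count or, like Chimera, overlap two pipelines in opposite directions to fill the idle slots.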