An Efficient Pipelined Data Parallel Approach for Training Large Models
☆75 · Dec 11, 2020 · Updated 5 years ago
Alternatives and similar repositories for DAPPLE
Users interested in DAPPLE are comparing it to the libraries listed below.
- ☆392 · Nov 4, 2022 · Updated 3 years ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆70 · Mar 20, 2025 · Updated last year
- An experimental parallel training platform ☆56 · Mar 25, 2024 · Updated last year
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆44 · Nov 4, 2022 · Updated 3 years ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆55 · Dec 11, 2022 · Updated 3 years ago
- FTPipe and related pipeline model parallelism research. ☆44 · May 16, 2023 · Updated 2 years ago
- ☆52 · Dec 13, 2022 · Updated 3 years ago
- ☆78 · May 4, 2021 · Updated 4 years ago
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS '24) ☆20 · Feb 23, 2024 · Updated 2 years ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆271 · Mar 31, 2023 · Updated 2 years ago
- GPU-scheduler-for-deep-learning ☆209 · Nov 5, 2020 · Updated 5 years ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021) ☆56 · Jul 21, 2021 · Updated 4 years ago
- Synthesizer for optimal collective communication algorithms ☆123 · Apr 8, 2024 · Updated last year
- Efficient-Tensor-Management-on-HM-for-Deep-Learning ☆10 · Nov 15, 2021 · Updated 4 years ago
- ☆10 · Aug 4, 2020 · Updated 5 years ago
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training ☆24 · Mar 1, 2024 · Updated 2 years ago
- Artifacts for our ASPLOS '23 paper ElasticFlow ☆56 · May 10, 2024 · Updated last year
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆137 · Feb 21, 2022 · Updated 4 years ago
- Artifact for 'Register Optimizations for Stencils on GPUs' ☆10 · Sep 18, 2018 · Updated 7 years ago
- nnScaler: Compiling DNN models for Parallel Training ☆126 · Sep 23, 2025 · Updated 5 months ago
- ☆84 · Feb 11, 2026 · Updated last month
- Compiler for Dynamic Neural Networks ☆45 · Nov 13, 2023 · Updated 2 years ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆127 · May 9, 2022 · Updated 3 years ago
- ☆27 · Aug 31, 2023 · Updated 2 years ago
- ☆26 · Dec 5, 2022 · Updated 3 years ago
- AI model training on heterogeneous, geo-distributed resources ☆39 · Nov 24, 2025 · Updated 3 months ago
- ☆42 · Sep 8, 2023 · Updated 2 years ago
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines ☆19 · Dec 8, 2023 · Updated 2 years ago
- Zero Bubble Pipeline Parallelism ☆451 · May 7, 2025 · Updated 10 months ago
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,864 · Updated this week
- ☆12 · May 3, 2020 · Updated 5 years ago
- Microsoft Collective Communication Library ☆387 · Sep 20, 2023 · Updated 2 years ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… ☆15 · Sep 21, 2023 · Updated 2 years ago
- Code for "Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads", which appeared at OSDI 2020 ☆137 · Jul 25, 2024 · Updated last year
- Fine-grained GPU sharing primitives ☆147 · Jul 28, 2025 · Updated 7 months ago
- Yet another polyhedral compiler for deep learning ☆19 · Apr 14, 2023 · Updated 2 years ago
- A schedule language for large model training ☆152 · Aug 21, 2025 · Updated 7 months ago
- SQL Optimizations using MLIR ☆12 · Apr 5, 2020 · Updated 5 years ago
- Artifact repository for the paper "Automatic Generation of High-Performance Quantized Machine Learning Kernels" ☆17 · Oct 13, 2020 · Updated 5 years ago