An Efficient Pipelined Data Parallel Approach for Training Large Models
☆76 · Dec 11, 2020 · Updated 5 years ago
Alternatives and similar repositories for DAPPLE
Users interested in DAPPLE are comparing it to the libraries listed below; a toy sketch of the pipeline schedules several of them implement follows the list.
- ☆393 · Nov 4, 2022 · Updated 3 years ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆70 · Mar 20, 2025 · Updated last year
- An experimental parallel training platform. ☆56 · Mar 25, 2024 · Updated 2 years ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆44 · Nov 4, 2022 · Updated 3 years ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆55 · Dec 11, 2022 · Updated 3 years ago
- FTPipe and related pipeline model parallelism research. ☆44 · May 16, 2023 · Updated 2 years ago
- ☆52 · Dec 13, 2022 · Updated 3 years ago
- ☆78 · May 4, 2021 · Updated 4 years ago
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS 2024). ☆20 · Feb 23, 2024 · Updated 2 years ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆272 · Mar 31, 2023 · Updated 3 years ago
- A GPipe implementation in PyTorch. ☆862 · Jul 25, 2024 · Updated last year
- GPU-scheduler-for-deep-learning. ☆209 · Nov 5, 2020 · Updated 5 years ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021). ☆56 · Jul 21, 2021 · Updated 4 years ago
- Synthesizer for optimal collective communication algorithms. ☆123 · Apr 8, 2024 · Updated 2 years ago
- Efficient-Tensor-Management-on-HM-for-Deep-Learning. ☆10 · Nov 15, 2021 · Updated 4 years ago
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training. ☆24 · Mar 1, 2024 · Updated 2 years ago
- ☆10 · Aug 4, 2020 · Updated 5 years ago
- Artifacts for our ASPLOS'23 paper ElasticFlow. ☆56 · May 10, 2024 · Updated last year
- Training neural networks in TensorFlow 2.0 with 5x less memory. ☆137 · Feb 21, 2022 · Updated 4 years ago
- Artifact for "Register Optimizations for Stencils on GPUs". ☆10 · Sep 18, 2018 · Updated 7 years ago
- nnScaler: Compiling DNN models for Parallel Training. ☆126 · Updated this week
- ☆84 · Feb 11, 2026 · Updated 2 months ago
- ☆41 · Oct 12, 2020 · Updated 5 years ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications. ☆127 · May 9, 2022 · Updated 3 years ago
- Compiler for Dynamic Neural Networks. ☆45 · Nov 13, 2023 · Updated 2 years ago
- ☆27 · Aug 31, 2023 · Updated 2 years ago
- ☆26 · Dec 5, 2022 · Updated 3 years ago
- ☆42 · Sep 8, 2023 · Updated 2 years ago
- Zero Bubble Pipeline Parallelism. ☆452 · May 7, 2025 · Updated 11 months ago
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training. ☆1,870 · Mar 25, 2026 · Updated 2 weeks ago
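
Several of the projects above (GPipe, DAPPLE, Chimera, Zero Bubble) differ mainly in how they order the forward and backward passes of micro-batches across pipeline stages. The snippet below is a toy, tensor-free sketch of the two baseline schedules those systems build on: GPipe's fill-drain order and the early-backward (1F1B) order popularized by DAPPLE. All function names are illustrative and are not taken from any of the listed codebases.

```python
"""Toy illustration of two pipeline-parallel schedules: GPipe's
fill-drain and the early-backward (1F1B) order used by DAPPLE.
This is a scheduling sketch only -- no tensors, no communication --
and every name here is hypothetical, not from any listed repo."""

def gpipe_schedule(stages: int, micro_batches: int):
    """Fill-drain: run all forwards first, then all backwards, so every
    stage must hold activations for all micro-batches at once.
    Returns, per stage, the ordered list of (op, micro_batch) pairs."""
    return {
        s: [("F", m) for m in range(micro_batches)]
           + [("B", m) for m in reversed(range(micro_batches))]
        for s in range(stages)
    }

def one_f_one_b_schedule(stages: int, micro_batches: int):
    """1F1B: each stage runs a short forward warm-up, then alternates one
    backward with one forward, so at most `stages` activations are live
    on the first stage (and fewer on deeper stages)."""
    order = {}
    for s in range(stages):
        warmup = min(stages - s, micro_batches)  # deeper stages warm up less
        ops, f, b = [], 0, 0
        for _ in range(warmup):                  # warm-up forwards
            ops.append(("F", f)); f += 1
        while b < micro_batches:                 # steady state, then drain
            ops.append(("B", b)); b += 1
            if f < micro_batches:
                ops.append(("F", f)); f += 1
        order[s] = ops
    return order

if __name__ == "__main__":
    for name, sched in [("GPipe", gpipe_schedule(4, 8)),
                        ("1F1B", one_f_one_b_schedule(4, 8))]:
        print(name)
        for stage, ops in sched.items():
            print(f"  stage {stage}: " + " ".join(f"{op}{m}" for op, m in ops))
```

Printing both schedules for 4 stages and 8 micro-batches shows the memory difference at a glance: under fill-drain, stage 0 has all 8 forward activations in flight before the first backward, while under 1F1B it never holds more than 4.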