PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021)
☆56 · Jul 21, 2021 · Updated 4 years ago
Alternatives and similar repositories for PipeTransformer
Users interested in PipeTransformer are comparing it to the libraries listed below.
- ☆14 · Feb 1, 2021 · Updated 5 years ago
- Artifact repository for the paper "Automatic Generation of High-Performance Quantized Machine Learning Kernels" ☆17 · Oct 13, 2020 · Updated 5 years ago
- DELTA-pytorch: DELTA: Dynamically Optimizing GPU Memory beyond Tensor Recomputation ☆12 · Apr 16, 2024 · Updated last year
- ☆10 · Aug 4, 2020 · Updated 5 years ago
- Artifact for "Register Optimizations for Stencils on GPUs" ☆10 · Sep 18, 2018 · Updated 7 years ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models ☆70 · Mar 20, 2025 · Updated last year
- ☆42 · Sep 8, 2023 · Updated 2 years ago
- Low-variance and unbiased gradient for backpropagation through categorical random variables, with application in variational auto-encoder… ☆17 · Jul 1, 2020 · Updated 5 years ago
- ☆49 · Apr 11, 2025 · Updated 11 months ago
- FTPipe and related pipeline model parallelism research ☆44 · May 16, 2023 · Updated 2 years ago
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆75 · Dec 11, 2020 · Updated 5 years ago
- ☆25 · Apr 3, 2023 · Updated 2 years ago
- ComScribe is a tool to identify communication among all GPU-GPU and CPU-GPU pairs in a single-node multi-GPU system ☆27 · Jul 6, 2023 · Updated 2 years ago
- ☆56 · Jan 25, 2021 · Updated 5 years ago
- Research and development for optimizing transformers ☆131 · Feb 16, 2021 · Updated 5 years ago
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆24 · Nov 21, 2024 · Updated last year
- Code for the paper "ElasticTrainer: Speeding Up On-Device Training with Runtime Elastic Tensor Selection" (MobiSys '23) ☆14 · Nov 1, 2023 · Updated 2 years ago
- ☆78 · May 4, 2021 · Updated 4 years ago
- Resource-adaptive cluster scheduler for deep learning training ☆453 · Mar 5, 2023 · Updated 3 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆137 · Feb 21, 2022 · Updated 4 years ago
- ☆145 · Jan 30, 2025 · Updated last year
- Benchmark PyTorch Custom Operators ☆14 · Jul 6, 2023 · Updated 2 years ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters ☆44 · Nov 4, 2022 · Updated 3 years ago
- ☆28 · Aug 29, 2022 · Updated 3 years ago
- ☆102 · Jan 17, 2024 · Updated 2 years ago
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆69 · Dec 9, 2024 · Updated last year
- A cluster-wide model manager to accelerate DNN training via automated training warmup ☆36 · Jan 9, 2023 · Updated 3 years ago
- ☆23 · Jun 5, 2019 · Updated 6 years ago
- ☆13 · Jan 23, 2021 · Updated 5 years ago
- RaNNC is an automatic parallelization middleware used to train very large-scale neural networks ☆57 · Oct 15, 2022 · Updated 3 years ago
- Torch Distributed Experimental ☆117 · Aug 5, 2024 · Updated last year
- ☆11 · May 19, 2025 · Updated 10 months ago
- Multivariate cumulants of any order ☆15 · May 17, 2023 · Updated 2 years ago
- Model Factory is an ML training platform that helps engineers build ML models at scale ☆17 · Sep 27, 2021 · Updated 4 years ago
- A tutorial on distributed DRL with Ray and TensorFlow ☆10 · Dec 26, 2019 · Updated 6 years ago
- Official codebase of the paper "Rehearsal Revealed: The Limits and Merits of Revisiting Samples in Continual Learning" ☆29 · Oct 20, 2021 · Updated 4 years ago
- Resource-Efficient Federated Learning ☆24 · Jan 13, 2023 · Updated 3 years ago
- Experiments evaluating preemption on the NVIDIA Pascal architecture ☆17 · Nov 10, 2016 · Updated 9 years ago
- Lightweight and Parallel Deep Learning Framework ☆264 · Nov 26, 2022 · Updated 3 years ago