PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models. ICML 2021
☆56 · Jul 21, 2021 · Updated 4 years ago
Alternatives and similar repositories for PipeTransformer
Users interested in PipeTransformer are comparing it to the repositories listed below.
- ☆22 · Nov 20, 2020 · Updated 5 years ago
- ☆14 · Feb 1, 2021 · Updated 5 years ago
- Artifact repository for the paper "Automatic Generation of High-Performance Quantized Machine Learning Kernels" ☆17 · Oct 13, 2020 · Updated 5 years ago
- Artifact for "Register Optimizations for Stencils on GPUs" ☆10 · Sep 18, 2018 · Updated 7 years ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆70 · Mar 20, 2025 · Updated last year
- ☆42 · Sep 8, 2023 · Updated 2 years ago
- ☆393 · Nov 4, 2022 · Updated 3 years ago
- Low-variance and unbiased gradient for backpropagation through categorical random variables, with application in variational auto-encoder… ☆17 · Jul 1, 2020 · Updated 5 years ago
- ☆49 · Apr 11, 2025 · Updated 11 months ago
- FTPipe and related pipeline model parallelism research. ☆44 · May 16, 2023 · Updated 2 years ago
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆76 · Dec 11, 2020 · Updated 5 years ago
- ☆25 · Apr 3, 2023 · Updated 3 years ago
- ☆251 · Jul 25, 2024 · Updated last year
- ComScribe is a tool to identify communication among all GPU-GPU and CPU-GPU pairs in a single-node multi-GPU system. ☆27 · Jul 6, 2023 · Updated 2 years ago
- ☆57 · Jan 25, 2021 · Updated 5 years ago
- Research and development for optimizing transformers ☆131 · Feb 16, 2021 · Updated 5 years ago
- Code for the paper "ElasticTrainer: Speeding Up On-Device Training with Runtime Elastic Tensor Selection" (MobiSys '23) ☆14 · Nov 1, 2023 · Updated 2 years ago
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆24 · Nov 21, 2024 · Updated last year
- Resource-adaptive cluster scheduler for deep learning training. ☆457 · Mar 5, 2023 · Updated 3 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆137 · Feb 21, 2022 · Updated 4 years ago
- ☆145 · Jan 30, 2025 · Updated last year
- Benchmark PyTorch Custom Operators ☆14 · Jul 6, 2023 · Updated 2 years ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆44 · Nov 4, 2022 · Updated 3 years ago
- Bagua speeds up PyTorch ☆883 · Aug 1, 2024 · Updated last year
- ☆14 · Aug 3, 2024 · Updated last year
- ☆102 · Jan 17, 2024 · Updated 2 years ago
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23: RedCoast, a lightweight tool to automate distributed training and inference ☆69 · Dec 9, 2024 · Updated last year
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆36 · Jan 9, 2023 · Updated 3 years ago
- ☆23 · Jun 5, 2019 · Updated 6 years ago
- ☆13 · Jan 23, 2021 · Updated 5 years ago
- RaNNC is an automatic parallelization middleware used to train very large-scale neural networks. ☆57 · Oct 15, 2022 · Updated 3 years ago
- ☆11 · May 19, 2025 · Updated 10 months ago
- [ICLR 2022] "PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication" by Cheng Wan, Y… ☆34 · Mar 15, 2023 · Updated 3 years ago
- Official codebase of the paper "Rehearsal revealed: The limits and merits of revisiting samples in continual learning" ☆29 · Oct 20, 2021 · Updated 4 years ago
- Resource Efficient Federated Learning ☆24 · Jan 13, 2023 · Updated 3 years ago
- ☆16 · May 4, 2021 · Updated 4 years ago
- Experiments evaluating preemption on the NVIDIA Pascal architecture ☆16 · Nov 10, 2016 · Updated 9 years ago
- GPU code optimizer for stencil computations; refer to our IPDPS '19 paper for more details ☆25 · Sep 27, 2019 · Updated 6 years ago
- Lightweight and Parallel Deep Learning Framework ☆263 · Nov 26, 2022 · Updated 3 years ago