Distributed-AI / PipeTransformer
PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021)
☆55 · Updated 3 years ago
Alternatives and similar repositories for PipeTransformer:
Users interested in PipeTransformer are comparing it to the libraries listed below.
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆37 · Updated 2 years ago
- [ICDCS 2023] DeAR: Accelerating Distributed Deep Learning with Fine-Grained All-Reduce Pipelining ☆12 · Updated last year
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆34 · Updated 2 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning ☆37 · Updated 4 years ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆64 · Updated 2 years ago
- ☆70 · Updated 3 years ago
- FTPipe and related pipeline model parallelism research. ☆41 · Updated last year
- [IJCAI 2023] An automated parallel training system that combines the advantages from both data and model parallelism. If you have any inte… ☆51 · Updated last year
- ☆75 · Updated 2 years ago
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆19 · Updated 9 months ago
- sensAI: ConvNets Decomposition via Class Parallelism for Fast Inference on Live Data ☆64 · Updated 6 months ago
- ☆92 · Updated 2 years ago
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline". ☆81 · Updated last year
- Distributed ML Training Benchmarks ☆27 · Updated last year
- ☆50 · Updated 8 months ago
- An external memory allocator example for PyTorch. ☆14 · Updated 3 years ago
- Research and development for optimizing transformers ☆125 · Updated 3 years ago
- ☆79 · Updated 2 months ago
- Code associated with the paper "Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees". ☆27 · Updated last year
- Machine Learning System ☆14 · Updated 4 years ago
- ☆22 · Updated 4 years ago
- ☆99 · Updated last year
- ☆16 · Updated 2 years ago
- nnScaler: Compiling DNN models for Parallel Training ☆91 · Updated this week
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆103 · Updated 2 months ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆107 · Updated last year
- Memory Optimizations for Deep Learning (ICML 2023) ☆62 · Updated 11 months ago
- Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines ☆56 · Updated last year
- A Deep Learning Cluster Scheduler ☆37 · Updated 4 years ago