lzhangbv / dear_pytorch
[ICDCS 2023] DeAR: Accelerating Distributed Deep Learning with Fine-Grained All-Reduce Pipelining
☆11 · Updated last year
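DeAR's key idea is to pipeline gradient all-reduce at fine granularity so that communication overlaps with computation instead of waiting for the whole backward pass. The sketch below shows the generic overlap pattern using per-parameter hooks; it is illustrative only, and `attach_overlap_hooks`/`finish_overlap` are hypothetical helpers, not DeAR's actual API.

```python
# Generic gradient/communication overlap via per-parameter hooks.
# Illustrative sketch only -- not DeAR's actual implementation.
import torch
import torch.distributed as dist

def attach_overlap_hooks(model: torch.nn.Module, pending: list) -> None:
    """All-reduce each gradient asynchronously as soon as backward produces it."""
    def hook(grad: torch.Tensor) -> torch.Tensor:
        # async_op=True returns immediately; the collective proceeds
        # while backward keeps computing the remaining gradients.
        work = dist.all_reduce(grad, op=dist.ReduceOp.SUM, async_op=True)
        pending.append((work, grad))
        return grad

    for p in model.parameters():
        if p.requires_grad:
            p.register_hook(hook)

def finish_overlap(pending: list, world_size: int) -> None:
    """Drain outstanding all-reduces and average, before optimizer.step()."""
    for work, grad in pending:
        work.wait()
        grad.div_(world_size)
    pending.clear()
```

Per iteration one would call `loss.backward()`, then `finish_overlap(pending, dist.get_world_size())`, then `optimizer.step()`. The paper's contribution lies in scheduling such fine-grained communication far more carefully than this naive pattern; see the repository for the actual design.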
Alternatives and similar repositories for dear_pytorch
Users interested in dear_pytorch are comparing it to the libraries listed below.
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS '24). ☆20 · Updated last year
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021). ☆56 · Updated 4 years ago
- ☆25 · Updated 2 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning ☆36 · Updated 5 years ago
- [IJCAI 2023] An automated parallel training system that combines the advantages of both data and model parallelism. ☆52 · Updated 2 years ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆41 · Updated 2 years ago
- ☆21 · Updated 3 years ago
- ☆68 · Updated 2 years ago
- Machine Learning System ☆14 · Updated 5 years ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆35 · Updated 2 years ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '23) ☆15 · Updated 2 years ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆54 · Updated last year
- An Attention Superoptimizer ☆22 · Updated 8 months ago
- Official repo for "SplitQuant / LLM-PQ: Resource-Efficient LLM Offline Serving on Heterogeneous GPUs via Phase-Aware Model Partition and Adaptive Quantization" ☆34 · Updated last month
- ☆40 · Updated 5 years ago
- FTPipe and related pipeline model parallelism research. ☆43 · Updated 2 years ago
- BytePS examples (Vision, NLP, GAN, etc.) ☆19 · Updated 2 years ago
- High-performance NCCL plugin for Bagua. ☆15 · Updated 4 years ago
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆25 · Updated 10 months ago
- Distributed ML Training Benchmarks ☆27 · Updated 2 years ago
- A Deep Learning Cluster Scheduler ☆39 · Updated 4 years ago
- Supplemental materials for the ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆23 · Updated 5 months ago
- ☆75 · Updated 4 years ago
- ☆22 · Updated 4 years ago
- ☆15 · Updated 3 years ago
- A framework for generating realistic LLM serving workloads ☆65 · Updated 2 weeks ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆120 · Updated 10 months ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆66 · Updated 6 months ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k communication volume, which is asymptotically optimal); a baseline top-k sketch follows this list. ☆27 · Updated 2 years ago
- ☆72 · Updated last year
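For context on the sparse-gradient entries above (Ok-Topk in particular), here is a minimal sketch of the naive all-gather-based top-k baseline that such schemes improve on. It assumes `torch.distributed` is already initialized; `topk_sparse_allreduce` is a hypothetical helper, not Ok-Topk's actual algorithm or API.

```python
# Naive top-k gradient sparsification with all_gather exchange.
# Baseline sketch only -- not Ok-Topk's sparse allreduce.
import torch
import torch.distributed as dist

def topk_sparse_allreduce(grad: torch.Tensor, k: int) -> torch.Tensor:
    flat = grad.flatten()
    # Keep only this rank's k largest-magnitude gradient entries.
    _, idx = flat.abs().topk(k)
    vals = flat[idx]

    world = dist.get_world_size()
    all_vals = [torch.empty_like(vals) for _ in range(world)]
    all_idx = [torch.empty_like(idx) for _ in range(world)]
    dist.all_gather(all_vals, vals)
    dist.all_gather(all_idx, idx)

    # Accumulate every rank's sparse contribution into a dense buffer.
    out = torch.zeros_like(flat)
    for v, i in zip(all_vals, all_idx):
        out.index_add_(0, i, v)
    return out.div_(world).view_as(grad)
```

Note that the all-gather traffic here grows linearly with the number of ranks, which is among the inefficiencies that schemes like Ok-Topk target.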