sands-lab / grace
GRACE - GRAdient ComprEssion for distributed deep learning
☆139 · Updated 9 months ago
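For context: GRACE and most of the repositories listed below implement gradient compression, where each worker compresses its gradients before they are exchanged and decompresses what it receives. The sketch below illustrates the idea with a simple top-k sparsifier in PyTorch; the function names and the `ratio` parameter are illustrative assumptions, not GRACE's actual API.

```python
# Minimal, hypothetical top-k gradient sparsifier (illustrative only; not GRACE's API).
import math
import torch

def topk_compress(grad: torch.Tensor, ratio: float = 0.01):
    """Keep only the k largest-magnitude entries; return (values, indices, shape)."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, indices = torch.topk(flat.abs(), k)
    return flat[indices], indices, tuple(grad.shape)

def topk_decompress(values, indices, shape):
    """Scatter the kept entries back into a dense zero tensor of the original shape."""
    flat = torch.zeros(math.prod(shape), dtype=values.dtype, device=values.device)
    flat[indices] = values
    return flat.reshape(shape)

# Example: compress a random "gradient" to 1% of its entries and reconstruct it.
g = torch.randn(1024, 1024)
values, indices, shape = topk_compress(g, ratio=0.01)
g_hat = topk_decompress(values, indices, shape)
```

In practice, compression frameworks pair a sparsifier like this with a collective exchange of the (values, indices) pairs and often an error-feedback memory so that dropped entries are accumulated rather than lost.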
Alternatives and similar repositories for grace:
Users interested in grace are comparing it to the libraries listed below.
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training ☆221 · Updated 9 months ago
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning ☆36 · Updated 5 years ago
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 ☆146 · Updated 6 months ago
- Partial implementation of paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" ☆31 · Updated 4 years ago
- QSGD-TF ☆21 · Updated 5 years ago
- Understanding Top-k Sparsification in Distributed Deep Learning ☆24 · Updated 5 years ago
- Atomo: Communication-efficient Learning via Atomic Sparsification ☆25 · Updated 6 years ago
- Implementation of Parameter Server using PyTorch communication lib ☆42 · Updated 6 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning ☆37 · Updated 4 years ago
- Stochastic Gradient Push for Distributed Deep Learning ☆164 · Updated 2 years ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆126 · Updated 2 years ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… ☆25 · Updated 2 years ago
- ☆99 · Updated last year
- Implementation of the research paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training". Deep g… ☆18 · Updated 5 years ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆34 · Updated 2 years ago
- Code for reproducing experiments performed for Accordion ☆13 · Updated 3 years ago
- Dual-way gradient sparsification approach for async DNN training, based on PyTorch. ☆11 · Updated 2 years ago
- Implementation of (overlap) local SGD in PyTorch ☆33 · Updated 4 years ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆81 · Updated last year
- ☆46 · Updated 5 years ago
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 ☆58 · Updated 6 years ago
- Sketched SGD ☆28 · Updated 4 years ago
- Layer-wise Sparsification of Distributed Deep Learning ☆10 · Updated 4 years ago
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆76 · Updated 4 years ago
- Code for "Adaptive Gradient Quantization for Data-Parallel SGD", published in NeurIPS 2020. ☆30 · Updated 4 years ago
- Oort: Efficient Federated Learning via Guided Participant Selection ☆126 · Updated 3 years ago
- MG-WFBP: Merging Gradients Wisely for Efficient Communication in Distributed Deep Learning ☆11 · Updated 4 years ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models. ICML 2021 ☆56 · Updated 3 years ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… ☆15 · Updated last year
- Code for the signSGD paper ☆84 · Updated 4 years ago