vineeths96 / Gradient-Compression
We present a set of all-reduce-compatible gradient compression algorithms that significantly reduce communication overhead while maintaining the performance of vanilla SGD. We empirically evaluate the compression methods by training deep neural networks on the CIFAR-10 dataset.
☆10 · Updated 3 years ago
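For context, "all-reduce compatible" means the compressed gradients produced on different workers stay aligned, so they can be summed with an ordinary all-reduce instead of requiring a gather step. The sketch below illustrates one such scheme (random-k selection with a shared per-step seed) in PyTorch; it is not code from this repository, and the names `RandKCompressor` and `compressed_all_reduce` are hypothetical.

```python
import torch
import torch.distributed as dist


class RandKCompressor:
    """Keeps a random fraction of gradient coordinates, chosen identically on
    every worker via a shared per-step seed, so compressed vectors line up."""

    def __init__(self, compression_ratio: float = 0.01):
        self.ratio = compression_ratio

    def compress(self, grad: torch.Tensor, step: int):
        flat = grad.flatten()
        k = max(1, int(flat.numel() * self.ratio))
        gen = torch.Generator().manual_seed(step)  # same seed on every worker
        idx = torch.randperm(flat.numel(), generator=gen)[:k].to(flat.device)
        return flat[idx], idx

    def decompress(self, values: torch.Tensor, idx: torch.Tensor, shape) -> torch.Tensor:
        out = torch.zeros(shape, dtype=values.dtype, device=values.device).flatten()
        out[idx] = values
        return out.view(shape)


def compressed_all_reduce(grad: torch.Tensor, step: int, comp: RandKCompressor) -> torch.Tensor:
    values, idx = comp.compress(grad, step)
    if dist.is_initialized():
        dist.all_reduce(values, op=dist.ReduceOp.SUM)  # only k values cross the network
        values /= dist.get_world_size()
    return comp.decompress(values, idx, grad.shape)
```

Because every worker selects the same index set, a single dense all-reduce over k values replaces the full-gradient exchange; schemes of this kind are typically paired with error feedback (locally accumulating the discarded residual) to recover the accuracy of vanilla SGD.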
Alternatives and similar repositories for Gradient-Compression
Users interested in Gradient-Compression are comparing it to the repositories listed below.
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… ☆26 · Updated 2 years ago
- ☆10 · Updated 4 years ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… ☆15 · Updated last year
- [ICDCS 2023] DeAR: Accelerating Distributed Deep Learning with Fine-Grained All-Reduce Pipelining ☆12 · Updated last year
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆35 · Updated 2 years ago
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning ☆10 · Updated 2 years ago
- A Sparse-tensor Communication Framework for Distributed Deep Learning ☆13 · Updated 3 years ago
- Code for reproducing the experiments performed for Accordion ☆13 · Updated 4 years ago
- A list of awesome papers on edge-AI inference ☆97 · Updated last year
- ☆20 · Updated 3 years ago
- MobiSys#114