yanring / DGS
Dual-way gradient sparsification approach for asynchronous DNN training, built on PyTorch.
☆11 · Updated 2 years ago
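The primitive that DGS and most of the repositories below build on is top-k gradient sparsification: each worker transmits only the k largest-magnitude gradient entries as (value, index) pairs. A minimal PyTorch sketch of that compress/decompress step (the function names and the 1% ratio are illustrative, not DGS's actual API):

```python
import torch

def sparsify_topk(grad: torch.Tensor, ratio: float = 0.01):
    """Keep only the k largest-magnitude entries of a gradient."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, idx = flat.abs().topk(k)        # indices of the k largest |g_i|
    return flat[idx], idx, flat.numel()

def desparsify(values, idx, numel, shape):
    """Rebuild a dense gradient from (value, index) pairs."""
    flat = torch.zeros(numel, dtype=values.dtype, device=values.device)
    flat[idx] = values
    return flat.view(shape)

g = torch.randn(4, 256)
vals, idx, n = sparsify_topk(g)
g_hat = desparsify(vals, idx, n, g.shape)  # dense again, ~99% zeros
```

At a 1% ratio this shrinks per-step communication by roughly 50× (each kept entry costs a value plus an index), at the price of a biased gradient estimate — which is why several of the projects below pair it with error feedback.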
Alternatives and similar repositories for DGS
Users interested in DGS are comparing it to the libraries listed below.
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning ☆36 · Updated 5 years ago
- Understanding Top-k Sparsification in Distributed Deep Learning ☆24 · Updated 5 years ago
- Source code of the ICLR 2020 submission Zeno++: Robust Fully Asynchronous SGD ☆13 · Updated 5 years ago
- ☆30 · Updated 4 years ago
- Partial implementation of the paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" ☆31 · Updated 4 years ago
- Implementation of a Parameter Server using the PyTorch communication lib ☆42 · Updated 6 years ago
- Layer-wise Sparsification of Distributed Deep Learning ☆10 · Updated 4 years ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c…); the naive sparse-allreduce baseline it improves on is sketched after this list. ☆26 · Updated 2 years ago
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599; its error-feedback buffer is sketched after this list. ☆58 · Updated 6 years ago
- Atomo: Communication-efficient Learning via Atomic Sparsification ☆27 · Updated 6 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning ☆37 · Updated 5 years ago
- Sketched SGD
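Several of the entries above (Deep Gradient Compression, Sparsified SGD with Memory, Ok-Topk) pair top-k selection with error feedback: coordinates that are not transmitted accumulate in a local residual buffer and are added back before the next selection, which is what keeps convergence close to dense SGD. A hedged sketch of that loop (the class and its buffer handling are illustrative, not taken from any repo listed here):

```python
import torch

class TopKWithMemory:
    """Top-k sparsification with a local residual (error-feedback) buffer."""

    def __init__(self, ratio: float = 0.01):
        self.ratio = ratio
        self.residual = {}  # one buffer per named parameter

    def compress(self, name: str, grad: torch.Tensor):
        if name not in self.residual:
            self.residual[name] = torch.zeros_like(grad)
        corrected = grad + self.residual[name]  # add back what was dropped before
        flat = corrected.flatten()
        k = max(1, int(flat.numel() * self.ratio))
        _, idx = flat.abs().topk(k)
        # Everything NOT sent this round becomes the new residual.
        residual = corrected.clone()
        residual.view(-1)[idx] = 0.0
        self.residual[name] = residual
        return flat[idx], idx
```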
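For the allreduce-style entries (gTop-k S-SGD, Ok-Topk), the aggregation step must combine sparse gradients whose indices differ across workers. The naive baseline referenced in the Ok-Topk item above — all-gather every worker's (value, index) pairs and sum densely — looks roughly like this with torch.distributed (assumes an already-initialized process group and an identical k on every worker):

```python
import torch
import torch.distributed as dist

def sparse_allreduce(values, idx, numel, shape):
    """All-gather (value, index) pairs from every rank, then sum and average."""
    world = dist.get_world_size()
    all_vals = [torch.empty_like(values) for _ in range(world)]
    all_idx = [torch.empty_like(idx) for _ in range(world)]
    dist.all_gather(all_vals, values)
    dist.all_gather(all_idx, idx)
    out = torch.zeros(numel, dtype=values.dtype, device=values.device)
    for v, i in zip(all_vals, all_idx):
        out.index_add_(0, i, v)  # overlapping indices accumulate correctly
    return out.view(shape) / world
```

Its communication volume grows linearly with the number of workers, which is the kind of overhead Ok-Topk's sparse allreduce is designed to bound.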