ganshaoduo / QSGD-TF
☆21 · Updated 5 years ago
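QSGD-TF is, per its name, a TensorFlow implementation of QSGD (Alistarh et al., NeurIPS 2017), which stochastically quantizes each gradient to a small set of levels so that the decoded value is an unbiased estimate of the original. As orientation for the comparison below, here is a minimal NumPy sketch of that quantization step; the function names and the s=4 example are illustrative, not code from this repository.

```python
import numpy as np

def qsgd_quantize(v, s, rng):
    """Stochastically quantize v to s levels (QSGD-style sketch).

    Encodes v as (l2 norm, signed integer levels in [-s, s]); the decoded
    value is an unbiased estimate of v. Names are illustrative.
    """
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v), norm
    scaled = np.abs(v) / norm * s            # magnitudes mapped into [0, s]
    lower = np.floor(scaled)
    # Round up with probability equal to the fractional part (unbiased rounding).
    levels = lower + (rng.random(v.shape) < (scaled - lower))
    return np.sign(v) * levels, norm

def qsgd_dequantize(signed_levels, norm, s):
    """Decode; in expectation this recovers the original vector."""
    return signed_levels / s * norm

# Example: quantize a random gradient to 4 levels per coordinate.
rng = np.random.default_rng(0)
g = rng.standard_normal(8)
q, n = qsgd_quantize(g, s=4, rng=rng)
print(g)
print(qsgd_dequantize(q, n, s=4))
```

In the full scheme the signed levels are additionally entropy-coded (Elias coding in the paper) so that small levels cost few bits; the sketch stops at the quantization step.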
Alternatives and similar repositories for QSGD-TF:
Users interested in QSGD-TF are comparing it to the libraries listed below.
- Atomo: Communication-efficient Learning via Atomic Sparsification · ☆25 · Updated 6 years ago
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning · ☆36 · Updated 5 years ago
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 · ☆58 · Updated 6 years ago
- GRACE - GRAdient ComprEssion for distributed deep learning · ☆140 · Updated 6 months ago
- Implementation of (overlap) local SGD in PyTorch · ☆33 · Updated 4 years ago
- Code for the signSGD paper · ☆83 · Updated 4 years ago
- Stochastic Gradient Push for Distributed Deep Learning · ☆160 · Updated last year
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training · ☆218 · Updated 7 months ago
- Layer-wise Sparsification of Distributed Deep Learning · ☆10 · Updated 4 years ago
- SGD with compressed gradients and error-feedback: https://arxiv.org/abs/1901.09847 · ☆31 · Updated 6 months ago
- Ternary Gradients to Reduce Communication in Distributed Deep Learning (TensorFlow) · ☆183 · Updated 6 years ago
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 · ☆146 · Updated 3 months ago
- Algorithm: Decentralized Parallel Stochastic Gradient Descent · ☆41 · Updated 6 years ago
- ☆45 · Updated 4 years ago
- Sketched SGD · ☆28 · Updated 4 years ago
- ☆74 · Updated 5 years ago
- Partial implementation of the paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" · ☆31 · Updated 4 years ago
- Understanding Top-k Sparsification in Distributed Deep Learning · ☆24 · Updated 5 years ago (see the Top-k sketch after this list)
- Implementation of distributed machine learning with PyTorch + OpenMPI · ☆51 · Updated 5 years ago
- Implementation of the research paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training". Deep g… · ☆18 · Updated 5 years ago
- Vector quantization for stochastic gradient descent · ☆33 · Updated 4 years ago
- Implementation of a parameter server using the PyTorch communication library · ☆43 · Updated 5 years ago
- Dual-way gradient sparsification approach for async DNN training, based on PyTorch · ☆11 · Updated 2 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning · ☆37 · Updated 4 years ago
- A compressed adaptive optimizer for training large-scale deep learning models using PyTorch · ☆27 · Updated 5 years ago
- Decentralized SGD and Consensus with Communication Compression: https://arxiv.org/abs/1907.09356 · ☆66 · Updated 4 years ago
- FedNAS: Federated Deep Learning via Neural Architecture Search · ☆53 · Updated 3 years ago
- MG-WFBP: Merging Gradients Wisely for Efficient Communication in Distributed Deep Learning · ☆12 · Updated 3 years ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… · ☆24 · Updated 2 years ago
- Code for "Adaptive Gradient Quantization for Data-Parallel SGD", published in NeurIPS 2020 · ☆29 · Updated 4 years ago