epfml / sparsifiedSGD
Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599
☆58 · Updated 6 years ago
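The repository accompanies the paper linked above, which analyses sparsified (top-k or random-k) SGD with an error memory that re-injects untransmitted coordinates into later updates. The snippet below is a minimal sketch of that idea, not the repository's actual API; the function and parameter names (`sparsify_with_memory`, `k`, `lr`) are made up for illustration.

```python
# Minimal sketch of top-k sparsified SGD with error memory, in the spirit of
# "Sparsified SGD with Memory" (https://arxiv.org/abs/1809.07599).
# Names here are illustrative and do not mirror the repository's code.
import torch

def sparsify_with_memory(grad, memory, k, lr):
    """Return a k-sparse update and the updated error memory."""
    # Add the error accumulated in previous rounds to the fresh scaled gradient.
    accumulated = lr * grad + memory
    flat = accumulated.flatten()
    # Keep only the k coordinates with the largest magnitude.
    _, idx = torch.topk(flat.abs(), k)
    sparse = torch.zeros_like(flat)
    sparse[idx] = flat[idx]
    # Everything that was not transmitted stays in memory for the next step.
    new_memory = (flat - sparse).view_as(accumulated)
    return sparse.view_as(accumulated), new_memory

# Usage: one sparsified SGD step on a single parameter tensor.
w = torch.randn(1000, requires_grad=True)
memory = torch.zeros_like(w)
(w ** 2).sum().backward()
update, memory = sparsify_with_memory(w.grad, memory, k=10, lr=0.1)
with torch.no_grad():
    w -= update
```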
Alternatives and similar repositories for sparsifiedSGD
Users interested in sparsifiedSGD are comparing it to the repositories listed below.
- Code for the signSGD paper (a minimal sign-compression sketch appears after this list) ☆86 · Updated 4 years ago
- Vector quantization for stochastic gradient descent ☆35 · Updated 5 years ago
- SGD with compressed gradients and error feedback: https://arxiv.org/abs/1901.09847 ☆30 · Updated 10 months ago
- ☆46 · Updated 5 years ago
- Partial implementation of the paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" ☆31 · Updated 4 years ago
- Implementation of (overlap) local SGD in PyTorch ☆33 · Updated 4 years ago
- ☆74 · Updated 5 years ago
- Atomo: Communication-efficient Learning via Atomic Sparsification ☆27 · Updated 6 years ago
- Distributed machine learning with PyTorch + OpenMPI ☆51 · Updated 6 years ago
- Understanding Top-k Sparsification in Distributed Deep Learning ☆24 · Updated 5 years ago
- Sketched SGD ☆28 · Updated 4 years ago
- QSGD-TF ☆21 · Updated 6 years ago
- Decentralized SGD and Consensus with Communication Compression: https://arxiv.org/abs/1907.09356 ☆70 · Updated 4 years ago
- FedNAS: Federated Deep Learning via Neural Architecture Search ☆54 · Updated 3 years ago
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning ☆36 · Updated 5 years ago
- A compressed adaptive optimizer for training large-scale deep learning models using PyTorch ☆27 · Updated 5 years ago
- Simple Hierarchical Count Sketch in Python ☆20 · Updated 4 years ago
- ☆33 · Updated 5 years ago
- Code for "Adaptive Gradient Quantization for Data-Parallel SGD", published in NeurIPS 2020 ☆30 · Updated 4 years ago
- Code for the paper "Variance Reduced Local SGD with Lower Communication Complexity" ☆12 · Updated 5 years ago
- Source code of the ICLR 2020 submission "Zeno++: Robust Fully Asynchronous SGD" ☆13 · Updated 5 years ago
- Algorithm: Decentralized Parallel Stochastic Gradient Descent ☆44 · Updated 6 years ago
- Dual-way gradient sparsification approach for asynchronous DNN training, based on PyTorch ☆11 · Updated 2 years ago
- Implementation of a Parameter Server using the PyTorch communication library ☆42 · Updated 6 years ago
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training ☆222 · Updated 10 months ago
- Salvaging Federated Learning by Local Adaptation ☆56 · Updated 10 months ago
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 ☆146 · Updated 7 months ago
- DRACO: Byzantine-resilient Distributed Training via Redundant Gradients ☆23 · Updated 6 years ago
- Federated Multi-Task Learning ☆130 · Updated 6 years ago
- Stochastic Gradient Push for Distributed Deep Learning ☆168 · Updated 2 years ago
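For contrast with the sparsification-with-memory sketch above, the following is a rough illustration of sign-based compression with majority-vote aggregation, the idea behind the signSGD repository listed above. It is not the linked repository's code; the helper names and the simulated-worker setup are assumptions made for this example.

```python
# Illustrative sketch of sign-based gradient compression with majority-vote
# aggregation, as proposed in the signSGD paper; names are made up for this example.
import torch

def compress_sign(grad):
    # Each worker transmits only one bit per coordinate: the sign of the entry.
    return torch.sign(grad)

def majority_vote(worker_signs):
    # The server sums the per-worker signs and keeps the sign of the sum.
    return torch.sign(torch.stack(worker_signs).sum(dim=0))

# Usage with three simulated workers sharing one 5-dimensional parameter.
grads = [torch.randn(5) for _ in range(3)]
direction = majority_vote([compress_sign(g) for g in grads])
w = torch.zeros(5)
w -= 0.01 * direction  # descend along the voted direction
```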