epfml / LocalSGD-Code
☆45 · Updated 4 years ago
Alternatives and similar repositories for LocalSGD-Code:
Users interested in LocalSGD-Code are comparing it to the repositories listed below. A minimal sketch of the local SGD idea follows the list.
- Implementation of (overlap) local SGD in PyTorch ☆33 · Updated 4 years ago
- Code for the signSGD paper ☆83 · Updated 4 years ago
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 ☆58 · Updated 6 years ago
- ☆74 · Updated 5 years ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models. ICML 2021 ☆55 · Updated 3 years ago
- FedNAS: Federated Deep Learning via Neural Architecture Search ☆53 · Updated 3 years ago
- Code for "Adaptive Gradient Quantization for Data-Parallel SGD", published in NeurIPS 2020 ☆29 · Updated 4 years ago
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 ☆146 · Updated 3 months ago
- Vector quantization for stochastic gradient descent ☆33 · Updated 4 years ago
- Atomo: Communication-efficient Learning via Atomic Sparsification ☆25 · Updated 6 years ago
- Partial implementation of the paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" ☆31 · Updated 4 years ago
- SGD with compressed gradients and error-feedback: https://arxiv.org/abs/1901.09847 ☆31 · Updated 6 months ago
- Sketched SGD ☆28 · Updated 4 years ago
- Decentralized SGD and Consensus with Communication Compression: https://arxiv.org/abs/1907.09356 ☆66 · Updated 4 years ago
- A compressed adaptive optimizer for training large-scale deep learning models using PyTorch ☆27 · Updated 5 years ago
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" https://openreview.net/pdf?id=SkgsACVKPH ☆101 · Updated 5 years ago
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training ☆218 · Updated 7 months ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆87 · Updated 2 years ago
- Accuracy 77%. Large batch deep learning optimizer LARS for ImageNet with PyTorch and ResNet, using Horovod for distribution. Optional acc… ☆38 · Updated 3 years ago
- Code for paper "SWALP: Stochastic Weight Averaging forLow-Precision Training".☆62Updated 5 years ago
- [NeurIPS 2021] Sparse Training via Boosting Pruning Plasticity with Neuroregeneration ☆31 · Updated 2 years ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆46 · Updated last year
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆48 · Updated 3 years ago
- Code for Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot ☆42 · Updated 4 years ago
- Implementation of Continuous Sparsification, a method for pruning and ticket search in deep networks ☆33 · Updated 2 years ago
- QSGD-TF ☆21 · Updated 5 years ago
- Stochastic Gradient Push for Distributed Deep Learning ☆160 · Updated last year
- Reproducing RigL (ICML 2020) as a part of ML Reproducibility Challenge 2020 ☆28 · Updated 3 years ago
- ☆28 · Updated 5 years ago
- GRACE - GRAdient ComprEssion for distributed deep learning ☆140 · Updated 6 months ago
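As context for LocalSGD-Code and the local SGD entries above, here is a minimal, hedged sketch of the local SGD communication pattern: each worker runs several SGD steps on its own data shard, and only then are the parameters averaged across workers. It is a single-process NumPy simulation with made-up toy hyperparameters (`K` workers, `H` local steps, a synthetic linear-regression problem), not code from any of the listed repositories.

```python
# Minimal local SGD sketch (illustrative only): K simulated workers each take
# H local SGD steps on their own shard, then parameters are averaged.
import numpy as np

rng = np.random.default_rng(0)
d, n_per_worker, K, H, rounds, lr = 10, 256, 4, 8, 20, 0.05  # assumed toy values

# Synthetic linear-regression shards, one (X, y) pair per worker.
w_true = rng.normal(size=d)
shards = []
for _ in range(K):
    X = rng.normal(size=(n_per_worker, d))
    y = X @ w_true + 0.01 * rng.normal(size=n_per_worker)
    shards.append((X, y))

w = np.zeros(d)                          # shared model after each communication round
for _ in range(rounds):
    local_models = []
    for X, y in shards:                  # each worker starts from the shared model
        w_local = w.copy()
        for _ in range(H):               # H local SGD steps, no communication
            idx = rng.integers(0, n_per_worker, size=32)
            grad = X[idx].T @ (X[idx] @ w_local - y[idx]) / len(idx)
            w_local -= lr * grad
        local_models.append(w_local)
    w = np.mean(local_models, axis=0)    # communicate once per round: average parameters

print("distance to w_true:", np.linalg.norm(w - w_true))
```

Compared with synchronous data-parallel SGD, which averages gradients every step, this pattern communicates only once every `H` steps, which is the bandwidth saving these repositories study (several of them combine it with gradient compression or quantization).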