epfml / LocalSGD-Code
☆46 · Updated 5 years ago
Alternatives and similar repositories for LocalSGD-Code:
Users interested in LocalSGD-Code are comparing it to the libraries listed below.
- Implementation of (overlap) local SGD in PyTorch (see the local SGD sketch after this list) ☆33 · Updated 4 years ago
- Code for the signSGD paper (see the signSGD sketch after this list) ☆83 · Updated 4 years ago
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 ☆59 · Updated 6 years ago
- FedNAS: Federated Deep Learning via Neural Architecture Search ☆54 · Updated 3 years ago
- Partial implementation of the paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" ☆31 · Updated 4 years ago
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 ☆146 · Updated 4 months ago
- ☆74 · Updated 5 years ago
- Vector quantization for stochastic gradient descent ☆33 · Updated 4 years ago
- Atomo: Communication-efficient Learning via Atomic Sparsification ☆25 · Updated 6 years ago
- Sketched SGD ☆28 · Updated 4 years ago
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow": https://openreview.net/pdf?id=SkgsACVKPH ☆101 · Updated 5 years ago
- Code for "Adaptive Gradient Quantization for Data-Parallel SGD", published in NeurIPS 2020 ☆30 · Updated 4 years ago
- SGD with compressed gradients and error-feedback: https://arxiv.org/abs/1901.09847 ☆31 · Updated 7 months ago
- Decentralized SGD and Consensus with Communication Compression: https://arxiv.org/abs/1907.09356 ☆66 · Updated 4 years ago
- PyTorch implementation of the paper "SNIP: Single-shot Network Pruning based on Connection Sensitivity" by Lee et al. ☆107 · Updated 5 years ago
- [ICLR 2020] Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers ☆31 · Updated 5 years ago
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training ☆219 · Updated 8 months ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆87 · Updated 2 years ago
- SNIP: Single-shot Network Pruning based on Connection Sensitivity ☆113 · Updated 5 years ago
- Implementation of Continuous Sparsification, a method for pruning and ticket search in deep networks ☆33 · Updated 2 years ago
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆48 · Updated 4 years ago
- GRACE - GRAdient ComprEssion for distributed deep learning ☆139 · Updated 7 months ago
- ☆28 · Updated 5 years ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021) ☆56 · Updated 3 years ago
- Understanding Top-k Sparsification in Distributed Deep Learning (see the top-k/error-feedback sketch after this list) ☆24 · Updated 5 years ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 2 years ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆46 · Updated last year
- Source code of the ICLR 2020 submission "Zeno++: Robust Fully Asynchronous SGD" ☆13 · Updated 5 years ago
- Reproduction and analysis of the SNIP paper ☆30 · Updated 5 years ago
- Federated Dynamic Sparse Training ☆30 · Updated 2 years ago
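
A minimal sketch of the local SGD pattern that LocalSGD-Code implements: each worker takes several purely local optimizer steps, then all workers average their parameters in one communication round. The names `local_sgd_epoch` and `local_steps` are illustrative, not the repo's API, and the sketch assumes a `torch.distributed` process group is already initialized (e.g. via `torchrun`).

```python
# Minimal local SGD sketch (illustrative, not LocalSGD-Code's actual API).
# Assumes a torch.distributed process group is already initialized and that
# model, optimizer, loader, and loss_fn are supplied by the caller.
import torch
import torch.distributed as dist

def local_sgd_epoch(model, optimizer, loader, loss_fn, local_steps=8):
    """Take purely local SGD steps; average parameters every local_steps steps."""
    world_size = dist.get_world_size()
    for step, (x, y) in enumerate(loader):
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()  # local update, no communication
        if (step + 1) % local_steps == 0:
            # One communication round: average model parameters across workers.
            with torch.no_grad():
                for p in model.parameters():
                    dist.all_reduce(p.data, op=dist.ReduceOp.SUM)
                    p.data /= world_size
```

Averaging every `local_steps` steps instead of all-reducing gradients on every step is what reduces the communication frequency; the "overlap" variant listed above additionally hides the averaging behind computation.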
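For the signSGD entry, a sketch of the one-bit update rule, assuming gradients have already been computed with `backward()`; `signsgd_step` is a hypothetical helper, not the paper's code. Each worker would transmit only `grad.sign()` (one bit per coordinate), and with several workers the server aggregates by majority vote, i.e. the sign of the summed signs.

```python
# Minimal signSGD update sketch (illustrative helper, not the paper's code).
# With multiple workers, the server would sum the transmitted signs and apply
# the sign of that sum (majority vote). The single-worker update reduces to:
import torch

@torch.no_grad()
def signsgd_step(params, lr=1e-3):
    for p in params:
        if p.grad is not None:
            p.add_(p.grad.sign(), alpha=-lr)  # descend along the gradient's sign
```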
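For the top-k sparsification and error-feedback entries above (Sparsified SGD with Memory; SGD with compressed gradients and error-feedback), a sketch of the common pattern: transmit only the k largest-magnitude gradient coordinates and carry the untransmitted residual into the next round. `TopKCompressor` and `k_ratio` are illustrative names, not any listed repo's API.

```python
# Minimal top-k sparsification with error feedback (illustrative sketch).
# The residual that is not transmitted is kept in local "memory" and added
# back to the gradient before the next compression step.
import torch

class TopKCompressor:
    def __init__(self, k_ratio=0.01):
        self.k_ratio = k_ratio
        self.memory = {}  # per-tensor residual ("memory" / error feedback)

    def compress(self, name, grad):
        # Error feedback: re-inject the residual from the previous round.
        residual = self.memory.get(name, torch.zeros_like(grad))
        corrected = (grad + residual).flatten()
        k = max(1, int(corrected.numel() * self.k_ratio))
        _, idx = corrected.abs().topk(k)
        sparse = torch.zeros_like(corrected)
        sparse[idx] = corrected[idx]
        # Everything not transmitted becomes the new residual.
        self.memory[name] = (corrected - sparse).view_as(grad)
        return sparse.view_as(grad)  # what would actually be communicated
```

In a real distributed run, only the nonzero values and their indices would be exchanged (e.g. via all-gather) before the optimizer step; the dense return value here is just for clarity.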