Tabrizian / learning-to-quantize
Code for "Adaptive Gradient Quantization for Data-Parallel SGD", published in NeurIPS 2020.
☆30Updated 4 years ago
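For orientation, the common primitive behind this repository and several of the alternatives below is quantizing gradients onto a small set of levels with randomized rounding, so the compressed gradient stays unbiased in expectation. Below is a minimal sketch of that primitive (QSGD-style uniform levels; the function and parameter names are illustrative, not taken from this repository):

```python
import torch

def quantize(grad: torch.Tensor, num_levels: int = 4) -> torch.Tensor:
    """Stochastically round |grad| onto num_levels uniform levels in [0, max|grad|]."""
    scale = grad.abs().max()
    if scale == 0:
        return torch.zeros_like(grad)
    x = grad.abs() / scale * (num_levels - 1)  # position of each entry on the level grid
    lower = x.floor()
    # Round up with probability equal to the fractional part, so E[level] = x (unbiased).
    level = lower + (torch.rand_like(x) < (x - lower)).float()
    return grad.sign() * level / (num_levels - 1) * scale

g = torch.randn(10)
print(quantize(g))  # low-precision, unbiased estimate of g
```

Unbiasedness is what makes averaging quantized gradients across workers behave like ordinary SGD, at the cost of extra variance; adaptive schemes like the paper above tune where the levels sit.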
Alternatives and similar repositories for learning-to-quantize
Users interested in learning-to-quantize are comparing it to the repositories listed below.
- Partial implementation of the paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" ☆31 · Updated 5 years ago
- Code for the signSGD paper ☆90 · Updated 4 years ago
- Vector quantization for stochastic gradient descent ☆35 · Updated 5 years ago
- ☆46 · Updated 5 years ago
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 (the top-k-with-memory pattern is sketched after this list) ☆59 · Updated 7 years ago
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training ☆225 · Updated last year
- FedNAS: Federated Deep Learning via Neural Architecture Search ☆54 · Updated 4 years ago
- SGD with compressed gradients and error-feedback: https://arxiv.org/abs/1901.09847 ☆31 · Updated last year
- Implementation of (overlap) local SGD in PyTorch ☆34 · Updated 5 years ago
- Decentralized SGD and Consensus with Communication Compression: https://arxiv.org/abs/1907.09356 ☆72 · Updated 5 years ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆90 · Updated 2 years ago
- ☆33 · Updated 5 years ago
- Understanding Top-k Sparsification in Distributed Deep Learning ☆24 · Updated 6 years ago
- ☆133 · Updated 2 years ago
- Model compression by constrained optimization, using the Learning-Compression (LC) algorithm ☆72 · Updated 3 years ago
- [NeurIPS 2021] Sparse Training via Boosting Pruning Plasticity with Neuroregeneration ☆31 · Updated 2 years ago
- PyTorch implementation of the paper "SNIP: Single-shot Network Pruning based on Connection Sensitivity" by Lee et al. ☆111 · Updated 6 years ago
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" https://openreview.net/pdf?id=SkgsACVKPH ☆105 · Updated 5 years ago
- Implementation of the research paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training". Deep g… ☆18 · Updated 6 years ago
- Implementation of Compressed SGD with Compressed Gradients in PyTorch ☆13 · Updated last year
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 ☆149 · Updated last year
- An Efficient and General Framework for Layerwise-Adaptive Gradient Compression ☆14 · Updated 2 years ago
- [IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization ☆59 · Updated 2 years ago
- Federated Dynamic Sparse Training ☆32 · Updated 3 years ago
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning ☆36 · Updated 6 years ago
- Algorithm: Decentralized Parallel Stochastic Gradient Descent ☆46 · Updated 7 years ago
- Any-Precision Deep Neural Networks (AAAI 2021) ☆61 · Updated 5 years ago
- Stochastic Gradient Push for Distributed Deep Learning ☆169 · Updated 2 years ago
- ☆22 · Updated 2 years ago
- ProxQuant: Quantized Neural Networks via Proximal Operators ☆30 · Updated 6 years ago
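Several entries above (Sparsified SGD with Memory, error-feedback SGD, top-k sparsification, gTop-k S-SGD) share a second primitive: send only the largest-magnitude gradient entries and accumulate everything that was dropped into a residual that is added back on the next step. A minimal sketch under those assumptions; the name `topk_with_feedback` and the module-level `memory` dict are hypothetical, not taken from any repository listed here:

```python
import torch

memory: dict[str, torch.Tensor] = {}  # per-tensor residual of dropped entries

def topk_with_feedback(name: str, grad: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the k largest-magnitude entries; carry the rest to the next step."""
    corrected = grad + memory.get(name, torch.zeros_like(grad))
    flat = corrected.flatten()
    idx = flat.abs().topk(k).indices
    sparse = torch.zeros_like(flat)
    sparse[idx] = flat[idx]
    sparse = sparse.view_as(grad)
    memory[name] = corrected - sparse  # error feedback: remember what was dropped
    return sparse  # this sparse tensor is what would be communicated

g = torch.randn(1000)
print(topk_with_feedback("layer1.weight", g, k=10).count_nonzero())  # tensor(10)
```

The residual is what distinguishes these methods from plain top-k: biased compression alone can stall training, while replaying the accumulated error restores convergence in practice.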