epfml/powersgd
Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727
☆146 · Updated 6 months ago
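For orientation, below is a minimal sketch of the scheme the paper describes: rank-r compression of a 2-D gradient via one step of power iteration, with error feedback so the compression error is re-sent later rather than lost. The function and variable names (`compress_decompress`, `rank`, `q`, `error`) are illustrative assumptions, not the repo's API.

```python
# Minimal PowerSGD-style sketch: one power-iteration step + error feedback.
# Assumes a 2-D gradient; names are illustrative, not the repo's API.
import torch

def compress_decompress(grad, q, error):
    """One round: returns the rank-r approximation, the warm-start factor
    for the next round, and the new error-feedback residual."""
    m = grad + error                  # error feedback: fold in last round's residual
    p = m @ q                         # power iteration against the previous Q
    p, _ = torch.linalg.qr(p)         # orthogonalize P (reduced QR)
    q_new = m.t() @ p                 # in data-parallel training, the small P and
                                      # Q factors are what get all-reduced, not grad
    approx = p @ q_new.t()            # rank-r reconstruction of the gradient
    return approx, q_new, m - approx  # carry the compression error forward

# Toy usage: a 256x512 "gradient" compressed to rank 4 with a warm-started Q.
grad = torch.randn(256, 512)
rank = 4
q = torch.randn(512, rank)
error = torch.zeros_like(grad)
approx, q, error = compress_decompress(grad, q, error)
print(torch.linalg.matrix_rank(approx))  # at most 4
```

Communicating the n×r and m×r factors instead of the full n×m gradient is where the bandwidth saving comes from; warm-starting `q` across steps is what lets a single power-iteration step suffice in practice.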
Alternatives and similar repositories for powersgd:
Users interested in powersgd are comparing it to the libraries listed below.
- GRACE - GRAdient ComprEssion for distributed deep learning ☆139 · Updated 9 months ago
- ☆46 · Updated 5 years ago
- Implementation of (overlap) local SGD in PyTorch ☆33 · Updated 4 years ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models. ICML 2021 ☆56 · Updated 3 years ago
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training ☆221 · Updated 9 months ago
- ☆74 · Updated 5 years ago
- Atomo: Communication-efficient Learning via Atomic Sparsification ☆25 · Updated 6 years ago
- Sketched SGD ☆28 · Updated 4 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆131 · Updated 3 years ago
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 (see the top-k sketch after this list) ☆58 · Updated 6 years ago
- ☆93 · Updated 2 years ago
- Block-sparse primitives for PyTorch ☆155 · Updated 4 years ago
- Code for "Adaptive Gradient Quantization for Data-Parallel SGD", published in NeurIPS 2020. ☆30 · Updated 4 years ago
- Code for the signSGD paper (see the sign-compression sketch after this list) ☆84 · Updated 4 years ago
- Decentralized SGD and Consensus with Communication Compression: https://arxiv.org/abs/1907.09356 ☆68 · Updated 4 years ago
- Partial implementation of the paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" ☆31 · Updated 4 years ago
- QSGD-TF ☆21 · Updated 5 years ago
- Research and development for optimizing transformers ☆126 · Updated 4 years ago
- Stochastic Gradient Push for Distributed Deep Learning ☆164 · Updated 2 years ago
- Implementation of Parameter Server using PyTorch communication lib ☆42 · Updated 6 years ago
- Implements distributed machine learning with PyTorch + OpenMPI ☆51 · Updated 6 years ago
- Efficient reference implementations of the static & dynamic M-FAC algorithms (for pruning and optimization) ☆16 · Updated 3 years ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training" ☆62 · Updated 5 years ago
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning ☆36 · Updated 5 years ago
- [IJCAI2023] An automated parallel training system that combines the advantages from both data and model parallelism. If you have any inte… ☆51 · Updated last year
- Simple Distributed Deep Learning on TensorFlow ☆134 · Updated 2 years ago
- ☆22 · Updated 4 years ago
- Understanding Top-k Sparsification in Distributed Deep Learning ☆24 · Updated 5 years ago
- Code for "Practical Low-Rank Communication Compression in Decentralized Deep Learning" ☆16 · Updated 4 years ago
- Efficient LLM Inference Acceleration using Prompting ☆47 · Updated 6 months ago
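Several entries above (Sparsified SGD with Memory, gTop-k S-SGD, Understanding Top-k Sparsification) share one primitive: keep only the k largest-magnitude gradient coordinates and accumulate what was dropped into a residual ("memory") that is re-sent in later rounds. A minimal sketch, assuming a flat gradient tensor; the helper name `topk_with_memory` is hypothetical.

```python
# Top-k sparsification with error feedback ("memory"): a minimal sketch.
import torch

def topk_with_memory(grad, memory, k):
    """Keep the k largest-magnitude entries of grad + memory; fold the
    dropped mass into the residual so it gets sent in later rounds."""
    corrected = grad + memory            # error feedback
    _, idx = corrected.abs().topk(k)     # indices of the k largest magnitudes
    sparse = torch.zeros_like(corrected)
    sparse[idx] = corrected[idx]         # (idx, values) is all that goes on the wire
    return sparse, corrected - sparse    # new residual

grad = torch.randn(1024)
memory = torch.zeros_like(grad)
sparse, memory = topk_with_memory(grad, memory, k=10)
print((sparse != 0).sum())  # tensor(10)
```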
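The quantization entries (signSGD, QSGD-TF, Adaptive Gradient Quantization) compress differently: they coarsen each coordinate instead of dropping coordinates. Below is a minimal sketch of sign compression with majority-vote aggregation, in the spirit of the signSGD paper; the helper names are illustrative.

```python
# signSGD with majority vote: each worker sends ~1 bit per coordinate,
# and the server takes the coordinate-wise majority of the signs.
import torch

def sign_compress(grad):
    return torch.sign(grad)  # values in {-1, 0, 1}

def majority_vote(worker_signs):
    # Sign of the sum of signs == coordinate-wise majority vote.
    return torch.sign(torch.stack(worker_signs).sum(dim=0))

worker_grads = [torch.randn(8) for _ in range(5)]
update = majority_vote([sign_compress(g) for g in worker_grads])
print(update)  # aggregated sign vector; the server step is lr * update
```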