epfml / powersgd
Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727
☆147 · Updated 7 months ago
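For orientation before comparing alternatives: PowerSGD compresses an n×k gradient matrix into two rank-r factors using a single power iteration that is warm-started from the previous step. The sketch below is a minimal NumPy rendition of that compression round; the function and variable names and the toy shapes are mine, and the repo itself is a PyTorch implementation that additionally handles error feedback and the all-reduce communication hooks.

```python
import numpy as np

def powersgd_round(m, q_prev):
    """One compression round for a gradient matrix m of shape (n, k).

    Approximates m ~= p @ q.T with a single power iteration warm-started
    from the previous round's q_prev of shape (k, r). Workers would
    all-reduce the small factors p and q instead of the full matrix m.
    """
    p = m @ q_prev           # (n, r) left factor
    p, _ = np.linalg.qr(p)   # orthonormalize the columns of p
    q = m.T @ p              # (k, r) right factor
    return p, q

# Toy usage: rank-2 compression of a 256 x 128 gradient.
rng = np.random.default_rng(0)
grad = rng.standard_normal((256, 128))
q = rng.standard_normal((128, 2))    # warm start, reused across rounds
p, q = powersgd_round(grad, q)
residual = grad - p @ q.T            # kept locally as error feedback
```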
Alternatives and similar repositories for powersgd
Users interested in powersgd are comparing it to the repositories listed below.
- GRACE - GRAdient ComprEssion for distributed deep learning (☆140 · Updated 11 months ago)
- ☆46 · Updated 5 years ago
- Atomo: Communication-efficient Learning via Atomic Sparsification (☆27 · Updated 6 years ago)
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models, ICML 2021 (☆56 · Updated 3 years ago)
- Implementation of (overlap) local SGD in PyTorch (☆33 · Updated 4 years ago)
- ☆75 · Updated 6 years ago
- QSGD-TF (☆21 · Updated 6 years ago)
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training (☆222 · Updated 11 months ago)
- Sketched SGD (☆28 · Updated 4 years ago)
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 (☆58 · Updated 6 years ago; see the first sketch after this list)
- Code for the signSGD paper (☆88 · Updated 4 years ago; see the second sketch after this list)
- Training neural networks in TensorFlow 2.0 with 5x less memory (☆132 · Updated 3 years ago)
- ☆208 · Updated 2 years ago
- Efficient reference implementations of the static & dynamic M-FAC algorithms (for pruning and optimization) (☆17 · Updated 3 years ago)
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning (☆36 · Updated 5 years ago)
- Decentralized SGD and Consensus with Communication Compression: https://arxiv.org/abs/1907.09356 (☆70 · Updated 4 years ago)
- Stochastic Gradient Push for Distributed Deep Learning (☆168 · Updated 2 years ago)
- Research and development for optimizing transformers (☆129 · Updated 4 years ago)
- ☆42 · Updated 2 years ago
- [IJCAI 2023] An automated parallel training system that combines the advantages of both data and model parallelism. If you have any inte… (☆51 · Updated 2 years ago)
- Partial implementation of the paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" (☆31 · Updated 4 years ago)
- Large-batch deep learning optimizer LARS for ImageNet with PyTorch and ResNet, reaching 77% accuracy, using Horovod for distribution. Optional acc… (☆38 · Updated 4 years ago)
- Code for "Adaptive Gradient Quantization for Data-Parallel SGD", published in NeurIPS 2020.☆30Updated 4 years ago
- Understanding Top-k Sparsification in Distributed Deep Learning☆24Updated 5 years ago
- ☆99Updated last year
- Simple Distributed Deep Learning on TensorFlow☆133Updated last week
- Efficient LLM Inference Acceleration using Prompting☆48Updated 8 months ago
- ☆80Updated last month
- ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training☆200Updated 2 years ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning".☆122Updated last year
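Several entries in the list above (Deep Gradient Compression, Sparsified SGD with Memory, gTop-k S-SGD, the Top-k sparsification study) share one primitive: transmit only the k largest-magnitude gradient coordinates and carry the untransmitted remainder forward as an error-feedback residual. Below is a minimal sketch of that primitive, with my own naming and without the momentum correction and clipping that DGC layers on top:

```python
import numpy as np

def topk_with_memory(grad, memory, k):
    """Top-k sparsification with error feedback (the "memory" residual).

    Only the k largest-magnitude entries of grad + memory are communicated;
    everything else stays local and is re-injected next iteration.
    """
    acc = grad + memory                           # add back last round's residual
    idx = np.argpartition(np.abs(acc), -k)[-k:]   # indices of the k largest magnitudes
    sparse = np.zeros_like(acc)
    sparse[idx] = acc[idx]                        # the values actually sent
    return sparse, acc - sparse                   # (to communicate, new local residual)

# Toy usage: keep 1% of a 10,000-dimensional gradient.
rng = np.random.default_rng(0)
g = rng.standard_normal(10_000)
sent, mem = topk_with_memory(g, np.zeros_like(g), k=100)
```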
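The quantization-based entries (QSGD-TF, the signSGD code, Adaptive Gradient Quantization) instead keep every coordinate but shrink its bit width. A minimal sketch of the 1-bit extreme, signSGD with majority vote, again with my own naming and omitting scaling factors (signSGD variants) and stochastic rounding (QSGD):

```python
import numpy as np

def signsgd_worker(grad):
    """Worker step: compress the gradient to one bit (its sign) per coordinate."""
    return np.sign(grad)

def signsgd_server(sign_grads):
    """Server step: aggregate the workers' sign vectors by majority vote."""
    return np.sign(np.sum(sign_grads, axis=0))

# Toy usage with three workers.
rng = np.random.default_rng(0)
votes = [signsgd_worker(rng.standard_normal(8)) for _ in range(3)]
update = signsgd_server(np.stack(votes))   # scale by a learning rate before applying
```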