epfml / powersgd
Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727
☆140 · Updated 2 weeks ago
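The core idea of PowerSGD is to replace each (reshaped) gradient matrix with a low-rank approximation obtained from a single step of power iteration, and to carry the compression error over to the next step (error feedback). Below is a minimal, single-worker sketch assuming PyTorch; the names `compress_decompress` and `orthogonalize` are illustrative, not the repository's actual API.

```python
# Minimal sketch of rank-r PowerSGD compression with error feedback (single worker).
# Function names and signatures are hypothetical, not the repo's API.
import torch

def orthogonalize(matrix):
    # Column-wise Gram-Schmidt orthogonalization.
    for i in range(matrix.shape[1]):
        col = matrix[:, i]
        col /= col.norm() + 1e-8
        if i + 1 < matrix.shape[1]:
            rest = matrix[:, i + 1:]
            rest -= (col @ rest) * col.unsqueeze(1)
    return matrix

def compress_decompress(grad, q, error):
    # grad: n x m gradient matrix; q: m x r factor warm-started from the
    # previous step; error: n x m error-feedback buffer.
    m = grad + error          # error feedback: add back what was dropped last step
    p = m @ q                 # one power-iteration step (all-reduced across workers)
    p = orthogonalize(p)
    q = m.t() @ p             # second factor (also all-reduced across workers)
    approx = p @ q.t()        # rank-r reconstruction used in place of the gradient
    error = m - approx        # remember the compression error for the next step
    return approx, q, error

# Example: compress a 256 x 512 gradient to rank 4.
grad = torch.randn(256, 512)
q = torch.randn(512, 4)
error = torch.zeros_like(grad)
approx, q, error = compress_decompress(grad, q, error)
```

In distributed training, only the small factors p and q would be communicated instead of the full gradient; q is warm-started from the previous step so one power-iteration step per update suffices, and the error buffer carries whatever the rank-r approximation missed into the next step.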
Related projects:
- GRACE - GRAdient ComprEssion for distributed deep learning ☆134 · Updated last month
- ☆42 · Updated 4 years ago
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training ☆209 · Updated 2 months ago
- Implementation of (overlap) local SGD in PyTorch ☆32 · Updated 4 years ago
- Atomo: Communication-efficient Learning via Atomic Sparsification ☆25 · Updated 5 years ago
- Sketched SGD ☆28 · Updated 4 years ago
- Stochastic Gradient Push for Distributed Deep Learning ☆156 · Updated last year
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models. ICML 2021 ☆54 · Updated 3 years ago
- QSGD-TF ☆21 · Updated 5 years ago
- Code for the signSGD paper ☆79 · Updated 3 years ago
- Research and development for optimizing transformers ☆121 · Updated 3 years ago
- ☆72 · Updated 5 years ago
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 ☆54 · Updated 5 years ago
- Decentralized SGD and Consensus with Communication Compression: https://arxiv.org/abs/1907.09356 ☆63 · Updated 4 years ago
- Efficient reference implementations of the static & dynamic M-FAC algorithms (for pruning and optimization) ☆16 · Updated 2 years ago
- Implement distributed machine learning with PyTorch + OpenMPI ☆52 · Updated 5 years ago
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning ☆35 · Updated 5 years ago
- Implementation of Parameter Server using PyTorch communication lib ☆42 · Updated 5 years ago
- Partial implementation of the paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" ☆31 · Updated 3 years ago
- Simple Distributed Deep Learning on TensorFlow ☆134 · Updated last year
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆127 · Updated 2 years ago
- Block Sparse movement pruning ☆77 · Updated 3 years ago
- Code for "Adaptive Gradient Quantization for Data-Parallel SGD", published in NeurIPS 2020 ☆28 · Updated 3 years ago
- Accuracy 77%. Large batch deep learning optimizer LARS for ImageNet with PyTorch and ResNet, using Horovod for distribution. Optional acc… ☆37 · Updated 3 years ago
- Vector quantization for stochastic gradient descent ☆33 · Updated 4 years ago
- FTPipe and related pipeline model parallelism research ☆41 · Updated last year
- ☆62 · Updated 3 years ago
- [IJCAI2023] An automated parallel training system that combines the advantages from both data and model parallelism. If you have any inte… ☆51 · Updated last year
- Understanding Top-k Sparsification in Distributed Deep Learning (see the sketch after this list) ☆22 · Updated 4 years ago
- ☆22 · Updated 3 years ago
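Several of the repositories above (Deep Gradient Compression, gTop-k S-SGD, and the top-k sparsification study) build on top-k gradient sparsification with error feedback. A minimal sketch of that primitive, assuming PyTorch; `topk_sparsify` and its signature are hypothetical, not taken from any of those codebases.

```python
# Illustrative top-k gradient sparsification with error feedback.
# The function name and arguments are hypothetical, not from the repos above.
import torch

def topk_sparsify(grad, error, k_ratio=0.01):
    # Keep only the k largest-magnitude entries of (grad + error).
    buf = (grad + error).flatten()            # error feedback: re-add previously dropped values
    k = max(1, int(k_ratio * buf.numel()))
    _, idx = torch.topk(buf.abs(), k)         # positions of the k largest magnitudes
    values = buf[idx]                         # the sparse payload that would be communicated
    dense = torch.zeros_like(buf)
    dense[idx] = values                       # decompressed gradient on the receiver side
    new_error = (buf - dense).view_as(grad)   # what was dropped, kept for the next step
    return values, idx, dense.view_as(grad), new_error

# Example: keep the top 1% of a gradient tensor.
grad = torch.randn(1024, 1024)
error = torch.zeros_like(grad)
values, idx, sparse_grad, error = topk_sparsify(grad, error, k_ratio=0.01)
```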