HKBU-HPML / OMGS-SGD
Layer-wise Sparsification of Distributed Deep Learning
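The repository targets layer-wise gradient sparsification for distributed training. As a minimal sketch (not the repository's actual API; `topk_sparsify`, `sparsify_layerwise`, and the `density` parameter are illustrative names), layer-wise top-k sparsification keeps only the largest-magnitude entries of each layer's gradient:

```python
import torch

def topk_sparsify(grad: torch.Tensor, density: float = 0.01):
    """Keep only the largest-magnitude entries of one layer's gradient.

    Returns the selected values and their flat indices; the remaining
    entries are treated as zero and need not be communicated.
    """
    flat = grad.flatten()
    k = max(1, int(flat.numel() * density))
    _, idx = torch.topk(flat.abs(), k)
    return flat[idx], idx

def sparsify_layerwise(model: torch.nn.Module, density: float = 0.01):
    """Apply top-k sparsification independently to every layer's gradient."""
    return {
        name: topk_sparsify(p.grad, density)
        for name, p in model.named_parameters()
        if p.grad is not None
    }
```

In a distributed run, only the selected (value, index) pairs per layer would be exchanged between workers, which is where the bandwidth saving over a dense all-reduce comes from.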
Related projects:
- Understanding Top-k Sparsification in Distributed Deep Learning
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning
- MG-WFBP: Merging Gradients Wisely for Efficient Communication in Distributed Deep Learning
- Dual-way gradient sparsification approach for async DNN training, based on PyTorch.
- Partial implementation of the paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training"
- Implementation of the research paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training"
- Atomo: Communication-efficient Learning via Atomic Sparsification
- A computation-parallel deep learning architecture.
- Vector quantization for stochastic gradient descent.
- QSGD-TF
- [ACM SoCC'22] Pisces: Efficient Federated Learning via Guided Asynchronous Training
- Implementation of (overlap) local SGD in PyTorch
- Code for reproducing experiments performed for Accordion
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599
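The last entry above, Sparsified SGD with Memory (arXiv:1809.07599), combines top-k sparsification with an error-feedback memory: whatever is dropped from one step is added back before the next. Below is a minimal single-tensor sketch assuming plain SGD; `step_with_memory`, `density`, and `lr` are illustrative names, not that repository's API:

```python
import torch

def step_with_memory(param, grad, memory, lr=0.01, density=0.01):
    """One sparsified-SGD-with-memory update for a single tensor.

    `memory` holds the residual that was not transmitted in earlier steps;
    it is added back before sparsifying, so no gradient mass is lost.
    """
    corrected = (grad + memory).flatten()
    k = max(1, int(corrected.numel() * density))
    _, idx = torch.topk(corrected.abs(), k)
    sparse = torch.zeros_like(corrected)
    sparse[idx] = corrected[idx]                         # the part that would be sent
    memory.copy_((corrected - sparse).view_as(memory))   # keep the rest locally
    param.add_(sparse.view_as(param), alpha=-lr)         # apply the sparse update
    return param, memory
```

In practice one zero-initialized `memory` buffer is kept per parameter, and only the sparse part is communicated; the residual accumulation is what lets small but persistent gradient components eventually be applied despite aggressive sparsification.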