HKBU-HPML / OMGS-SGD
Layer-wise Sparsification of Distributed Deep Learning
☆10Updated 5 years ago
Alternatives and similar repositories for OMGS-SGD
Users interested in OMGS-SGD are comparing it to the repositories listed below; a short top-k sparsification sketch follows the list.
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning☆36Updated 6 years ago
- A computation-parallel deep learning architecture.☆13Updated 6 years ago
- Understanding Top-k Sparsification in Distributed Deep Learning☆24Updated 5 years ago
- Code for reproducing experiments performed for Accordion☆13Updated 4 years ago
- ☆10Updated 4 years ago
- Dual-way gradient sparsification approach for async DNN training, based on PyTorch.☆11Updated 2 years ago
- Partial implementation of the paper "DEEP GRADIENT COMPRESSION: REDUCING THE COMMUNICATION BANDWIDTH FOR DISTRIBUTED TRAINING"☆31Updated 4 years ago
- [ACM SoCC'22] Pisces: Efficient Federated Learning via Guided Asynchronous Training☆13Updated 5 months ago
- MG-WFBP: Merging Gradients Wisely for Efficient Communication in Distributed Deep Learning☆12Updated 4 years ago
- Implementation of Parameter Server using PyTorch communication lib☆42Updated 6 years ago
- GRACE - GRAdient ComprEssion for distributed deep learning☆140Updated last year
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2…☆15Updated 2 years ago
- ☆15Updated 4 years ago
- Atomo: Communication-efficient Learning via Atomic Sparsification☆27Updated 6 years ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup☆35Updated 2 years ago
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training☆224Updated last year
- Create tiny ML systems for on-device learning.☆20Updated 4 years ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c…☆27Updated 2 years ago
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599☆59Updated 6 years ago
- Implementation of the research paper "DEEP GRADIENT COMPRESSION: REDUCING THE COMMUNICATION BANDWIDTH FOR DISTRIBUTED TRAINING". Deep g…☆18Updated 6 years ago
- Vector quantization for stochastic gradient descent.☆35Updated 5 years ago
- A Generic Resource-Aware Hyperparameter Tuning Execution Engine☆15Updated 3 years ago
- Decentralized SGD and Consensus with Communication Compression: https://arxiv.org/abs/1907.09356☆72Updated 5 years ago
- QSGD-TF☆21Updated 6 years ago
- Code for the signSGD paper☆90Updated 4 years ago
- We present a set of all-reduce compatible gradient compression algorithms which significantly reduce the communication overhead while mai…☆10Updated 3 years ago
- Oort: Efficient Federated Learning via Guided Participant Selection☆128Updated 3 years ago
- Stochastic Gradient Push for Distributed Deep Learning☆169Updated 2 years ago
- Algorithm: Decentralized Parallel Stochastic Gradient Descent☆45Updated 7 years ago
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning☆10Updated 2 years ago
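Most of the repositories above revolve around the same core operation: keep only the largest-magnitude gradient entries and communicate their indices and values. The sketch below shows that top-k sparsification step in PyTorch; it is illustrative only and not taken from any repository listed here, and the helper names `topk_sparsify` and `densify` are hypothetical.

```python
# Minimal top-k gradient sparsification sketch (the idea shared by gTop-k,
# Ok-Topk, and Deep Gradient Compression); not the code of any repo above.
import torch

def topk_sparsify(grad: torch.Tensor, ratio: float = 0.01):
    """Keep only the largest-magnitude `ratio` fraction of gradient entries."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, indices = torch.topk(flat.abs(), k)   # select by absolute magnitude
    values = flat[indices]                    # signed values of the kept entries
    return indices, values                    # only these pairs need to be sent

def densify(indices: torch.Tensor, values: torch.Tensor, shape, device=None):
    """Rebuild a dense gradient from the communicated (index, value) pairs."""
    numel = 1
    for s in shape:
        numel *= s
    flat = torch.zeros(numel, device=device)
    flat[indices] = values
    return flat.view(shape)

# Example: compress a fake gradient to 1% of its entries and reconstruct it.
g = torch.randn(1024, 1024)
idx, vals = topk_sparsify(g, ratio=0.01)
g_hat = densify(idx, vals, g.shape)
```

In an actual distributed setting, the entries dropped by `topk_sparsify` are typically accumulated locally (error feedback / memory) and added back to the gradient at the next step, as in Sparsified SGD with Memory and DGC.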