hwang595 / Draco
DRACO: Byzantine-resilient Distributed Training via Redundant Gradients
☆23 · Updated 6 years ago
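The core idea, as a loose sketch rather than the repository's actual implementation: each gradient is computed redundantly by r = 2q + 1 workers (a repetition code), so the parameter server can recover the honest gradient by a coordinate-wise majority vote even if up to q of those workers are Byzantine. All function and variable names below are illustrative.

```python
# A minimal sketch of repetition-code gradient aggregation in the spirit of
# DRACO. Each gradient block is computed redundantly by r = 2q + 1 workers,
# and the server recovers it by a coordinate-wise majority (here: median),
# which tolerates up to q Byzantine copies per block. Names are illustrative,
# not the repository's API.
import torch

def decode_redundant_gradients(copies: torch.Tensor, q: int) -> torch.Tensor:
    """copies: (r, d) tensor holding r redundant copies of a d-dim gradient,
    with r >= 2q + 1. Returns the honest gradient as long as at most q copies
    are corrupted (exact for a repetition code: the honest copies are
    identical and form a majority)."""
    r = copies.shape[0]
    assert r >= 2 * q + 1, "need r >= 2q + 1 copies to tolerate q failures"
    # With >= q + 1 identical honest copies out of r, the coordinate-wise
    # median always lands on an honest value.
    return copies.median(dim=0).values

# Toy usage: 5 copies of the same gradient, 2 of them Byzantine.
honest = torch.ones(4)
copies = honest.repeat(5, 1)
copies[0] = 100.0   # adversarial copy
copies[3] = -7.0    # adversarial copy
print(decode_redundant_gradients(copies, q=2))  # -> tensor([1., 1., 1., 1.])
```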
Alternatives and similar repositories for Draco
Users interested in Draco are comparing it to the repositories listed below.
- Sketched SGD ☆28 · Updated 5 years ago
- Stochastic Gradient Push for Distributed Deep Learning ☆169 · Updated 2 years ago
- Implementation of Parameter Server using PyTorch communication lib ☆42 · Updated 6 years ago
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 ☆59 · Updated 7 years ago
- ☆77 · Updated 6 years ago
- FedNAS: Federated Deep Learning via Neural Architecture Search ☆54 · Updated 4 years ago
- Code for the signSGD paper (see the signSGD sketch after this list) ☆90 · Updated 4 years ago
- A compressed adaptive optimizer for training large-scale deep learning models using PyTorch ☆26 · Updated 5 years ago
- Atomo: Communication-efficient Learning via Atomic Sparsification ☆27 · Updated 6 years ago
- InstaHide: Instance-hiding Schemes for Private Distributed Learning ☆50 · Updated 5 years ago
- Simple Hierarchical Count Sketch in Python ☆21 · Updated 4 years ago
- The Search for Sparse, Robust Neural Networks ☆11 · Updated 2 years ago
- SGD with compressed gradients and error-feedback: https://arxiv.org/abs/1901.09847 ☆31 · Updated last year
- Implements distributed machine learning with PyTorch + OpenMPI ☆52 · Updated 6 years ago
- Vector quantization for stochastic gradient descent ☆35 · Updated 5 years ago
- Byzantine-resilient distributed SGD with TensorFlow ☆40 · Updated 4 years ago
- Partial implementation of the paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" ☆31 · Updated 5 years ago
- ☆30 · Updated 5 years ago
- Code for LIT, ICML 2019 ☆20 · Updated 6 years ago
- Algorithm: Decentralized Parallel Stochastic Gradient Descent ☆46 · Updated 7 years ago
- ☆46 · Updated 5 years ago
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training ☆225 · Updated last year
- Decentralized SGD and Consensus with Communication Compression: https://arxiv.org/abs/1907.09356 ☆72 · Updated 5 years ago
- Bayesian Nonparametric Federated Learning of Neural Networks ☆146 · Updated 6 years ago
- Implementation of Compressed SGD with Compressed Gradients in PyTorch ☆13 · Updated last year
- Dual-way gradient sparsification approach for async DNN training, based on PyTorch ☆11 · Updated 2 years ago
- Code for Double-Blind Collaborative Learning (DBCL) ☆14 · Updated 4 years ago
- Understanding Top-k Sparsification in Distributed Deep Learning (see the Top-k sketch after this list) ☆24 · Updated 6 years ago
- Source code of the ICLR 2020 submission "Zeno++: Robust Fully Asynchronous SGD" ☆13 · Updated 5 years ago
- Federated Multi-Task Learning ☆131 · Updated 7 years ago
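For the Top-k sparsification entry above, a hedged illustration of the technique (function names are mine, not taken from any listed repository): each worker transmits only the k largest-magnitude gradient coordinates as index/value pairs, and the receiver rebuilds a mostly-zero dense gradient.

```python
# A minimal sketch of Top-k gradient sparsification. Only the k entries with
# the largest magnitude are communicated; everything else is dropped (real
# systems usually pair this with error feedback to accumulate the residual).
import torch

def topk_sparsify(grad: torch.Tensor, k: int):
    """Return (indices, values) for the k largest-magnitude entries of the
    flattened gradient; every other coordinate is treated as zero."""
    flat = grad.flatten()
    _, idx = flat.abs().topk(k)   # positions of the k biggest magnitudes
    return idx, flat[idx]

def densify(indices: torch.Tensor, values: torch.Tensor, shape: torch.Size) -> torch.Tensor:
    """Rebuild the dense (mostly zero) gradient on the receiving side."""
    out = torch.zeros(shape).flatten()
    out[indices] = values
    return out.reshape(shape)

# Toy round trip: send 3 of 32 coordinates, reconstruct the rest as zeros.
g = torch.randn(8, 4)
idx, vals = topk_sparsify(g, k=3)
g_hat = densify(idx, vals, g.shape)
```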
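And for the signSGD entry: a minimal sketch of signSGD with majority vote, where each worker transmits one bit per coordinate (the gradient sign) and the server updates with the majority sign. Again, the helper below is illustrative, not the repo's API.

```python
# signSGD with majority vote, sketched loosely: workers send sign(g), the
# server sums the signs and takes the sign of the sum, i.e. a coordinate-wise
# majority vote, then steps against that direction.
import torch

def signsgd_step(param: torch.Tensor, worker_grads: list, lr: float) -> None:
    """Apply one signSGD-with-majority-vote update in place.

    worker_grads: list of gradient tensors, one per worker, same shape as param.
    """
    vote = torch.stack([g.sign() for g in worker_grads]).sum(dim=0).sign()
    param -= lr * vote

# Toy usage: three workers, one of which sends an inverted gradient.
w = torch.zeros(4)
grads = [torch.tensor([1., -2., 3., -4.]),
         torch.tensor([2., -1., 1., -1.]),
         torch.tensor([-5., 5., -5., 5.])]   # dissenting worker is outvoted
signsgd_step(w, grads, lr=0.1)
print(w)  # -> tensor([-0.1000,  0.1000, -0.1000,  0.1000])
```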