epfml / error-feedback-SGD
SGD with compressed gradients and error-feedback: https://arxiv.org/abs/1901.09847
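The core idea of error feedback: compress the update, apply only the compressed part, and carry the compression residual (the "memory") into the next step. Below is a minimal PyTorch sketch of one such step; the `top_k_compress` helper, the function names, and the hyperparameters are illustrative assumptions, not this repository's actual API.

```python
import torch

def top_k_compress(t, k):
    """Illustrative compressor: keep the k largest-magnitude entries, zero the rest."""
    flat = t.flatten()
    idx = flat.abs().topk(k).indices
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.view_as(t)

def ef_sgd_step(param, grad, memory, lr, k):
    """One error-feedback SGD step (sketch): compress lr*grad plus the stored
    residual, apply the compressed update, and keep what the compressor dropped."""
    corrected = lr * grad + memory          # add back previously dropped signal
    update = top_k_compress(corrected, k)   # lossy part that would be communicated
    memory.copy_(corrected - update)        # residual feeds back into the next step
    param.data.add_(-update)                # descend along the compressed direction
```

With k equal to the number of elements this reduces to plain SGD; shrinking k trades communication for delayed (but not lost) signal, which is the guarantee the error-feedback analysis is about.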
Related projects:
- Code for the signSGD paper (a minimal signSGD sketch follows this list)
- Vector quantization for stochastic gradient descent
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 (see the sparsified-round sketch after this list)
- Atomo: Communication-efficient Learning via Atomic Sparsification
- Decentralized SGD and Consensus with Communication Compression: https://arxiv.org/abs/1907.09356
- Implementation of Compressed SGD with Compressed Gradients in PyTorch
- PyTorch implementation of the ICML 2017 paper "SplitNet: Learning to Semantically Split Deep Networks for Parameter Reduction and Model Parallelization"
- Implementation of (overlap) local SGD in PyTorch
- Partial implementation of the paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training"
- R-GAP: Recursive Gradient Attack on Privacy (ICLR 2021)
- Salvaging Federated Learning by Local Adaptation
- FedDANE: A Federated Newton-Type Method (Asilomar Conference on Signals, Systems, and Computers ’19)
- FedNAS: Federated Deep Learning via Neural Architecture Search
- Code for "Adaptive Gradient Quantization for Data-Parallel SGD" (NeurIPS 2020)
- Understanding Top-k Sparsification in Distributed Deep Learning
- Efficient Split-Mix federated learning for in-situ model customization during both training and testing time (ICLR 2022)
- Code and checkpoints of compressed networks for the paper "HYDRA: Pruning Adversarially Robust Neural Networks" (NeurIPS 2020)
- Code for "Federated Accelerated Stochastic Gradient Descent" (NeurIPS 2020)☆14Updated 3 years ago
- Benchmarking Semi-supervised Federated Learning☆52Updated 2 years ago
- PyTorch code for benchmarking communication-efficient distributed SGD optimization algorithms
- Algorithm: Decentralized Parallel Stochastic Gradient Descent
- Federated learning with PyTorch (federated averaging and consensus optimization) with reduced bandwidth
- Modular evaluation metrics and a benchmark for large-scale federated learning
- Code for the paper "Variance Reduced Local SGD with Lower Communication Complexity"
- Sketched SGD
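The signSGD sketch referenced at the signSGD entry above: the core update fits in one line, since each worker communicates only the sign of each gradient coordinate (one bit per coordinate); the paper's majority vote across workers is omitted. This is a sketch under those assumptions, not the linked repository's code.

```python
import torch

def signsgd_step(param, grad, lr):
    """signSGD (sketch): step along the elementwise sign of the gradient,
    so a worker only needs to transmit one bit per coordinate."""
    param.data.add_(-lr * torch.sign(grad))
```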
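And the sparsified-round sketch referenced at the Sparsified-SGD-with-Memory entry: each worker sends only its top-k coordinates and keeps the compression residual in a local memory, and the server averages the sparse messages. The function name, the `worker_grads`/`memories` structure, and `k` are illustrative assumptions, not that repository's interface.

```python
import torch

def sparsified_round(worker_grads, memories, lr, k):
    """One communication round of sparsified SGD with per-worker memory (sketch):
    each worker transmits only the top-k of (lr * grad + memory); the dropped
    coordinates stay in the worker's local memory, and the server averages."""
    messages = []
    for g, m in zip(worker_grads, memories):
        corrected = lr * g + m                # error-corrected local update
        flat = corrected.flatten()
        idx = flat.abs().topk(k).indices      # k largest-magnitude coordinates
        msg = torch.zeros_like(flat)
        msg[idx] = flat[idx]                  # sparse message sent to the server
        msg = msg.view_as(corrected)
        m.copy_(corrected - msg)              # residual never leaves the worker
        messages.append(msg)
    return torch.stack(messages).mean(dim=0)  # averaged model update
```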