epfml / cola
CoLa - Decentralized Linear Learning: https://arxiv.org/abs/1808.04883
☆20 · Updated 3 years ago
Alternatives and similar repositories for cola:
Users interested in cola are comparing it to the libraries listed below:
- Sketched SGD (☆28, updated 4 years ago)
- Code for the signSGD paper (☆83, updated 4 years ago; see the sketch after this list)
- A compressed adaptive optimizer for training large-scale deep learning models using PyTorch (☆27, updated 5 years ago)
- Decentralized SGD and Consensus with Communication Compression: https://arxiv.org/abs/1907.09356 (☆66, updated 4 years ago)
- Federated posterior averaging implemented in JAX (☆51, updated last year)
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 (☆58, updated 6 years ago; see the sketch after this list)
- Atomo: Communication-efficient Learning via Atomic Sparsification (☆25, updated 6 years ago)
- The source code to reproduce the results reported in the 'Federated Online Learning to Rank with Evolution Strategies' paper, published a… (☆33, updated 3 years ago)
- Tilted Empirical Risk Minimization (ICLR '21) (☆59, updated last year)
- Implement distributed machine learning with PyTorch + OpenMPI (☆51, updated 5 years ago)
- SmoothOut: Smoothing Out Sharp Minima to Improve Generalization in Deep Learning (☆23, updated 6 years ago)
- Stochastic Gradient Push for Distributed Deep Learning (☆160, updated last year)
- Implementation of (overlap) local SGD in PyTorch (☆33, updated 4 years ago)
- SGD with compressed gradients and error-feedback: https://arxiv.org/abs/1901.09847 (☆31, updated 6 months ago)
- Implementation of the paper "Meta-Learning by Adjusting Priors Based on Extended PAC-Bayes Theory", Ron Amit and Ron Meir, ICML 2018 (☆18, updated 3 years ago)
- Simple Hierarchical Count Sketch in Python (☆20, updated 3 years ago)
- Lua implementation of Entropy-SGD (☆81, updated 6 years ago)
- Algorithm: Decentralized Parallel Stochastic Gradient Descent (☆41, updated 6 years ago)
- FedDANE: A Federated Newton-Type Method (Asilomar Conference on Signals, Systems, and Computers '19) (☆25, updated last year)
- Randomized Smoothing of All Shapes and Sizes (ICML 2020) (☆52, updated 4 years ago)
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] (☆32, updated 4 years ago)
- Code for reproducing the experiments of the ICML 2019 paper "Robust Learning from Untrusted Sources" (☆19, updated 5 years ago)
- Convolutional Neural Tangent Kernel (☆109, updated 5 years ago)
- Code for "Federated Accelerated Stochastic Gradient Descent" (NeurIPS 2020)