Parallel SGD, done locally and remote
☆14, updated May 19, 2016
Alternatives and similar repositories for Distributed-SGD
Users interested in Distributed-SGD are comparing it to the libraries listed below.
- Atomo: Communication-efficient Learning via Atomic Sparsification (☆28, updated Dec 9, 2018)
- Algorithm: Decentralized Parallel Stochastic Gradient Descent (☆47, updated Sep 2, 2018)
- DropNet: Reducing Neural Network Complexity via Iterative Pruning (ICML 2020) (☆16, updated Aug 24, 2020)
- Stochastic Gradient Push for Distributed Deep Learning (☆171, updated Apr 5, 2023)
- A PyTorch implementation of Attention is all you need (☆43, updated Oct 16, 2018)
- Unnamed repository (☆17, updated May 10, 2019)
- LIBS2ML: A Library for Scalable Second Order Machine Learning Algorithms (☆12, updated Sep 14, 2021)
- Advanced optimizer with Gradient-Centralization (☆21, updated Aug 26, 2020)
- Unnamed repository (☆46, updated Mar 4, 2020)
- Unnamed repository (☆16, updated Dec 21, 2019)
- Unnamed repository (☆22, updated Sep 28, 2018)
- Various experiments on the [Fashion-MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset from Zalando (☆31, updated Sep 28, 2017)
- Tree-LSTM + Self-Structured Attention -- a method to summarize textual data by topics (☆10, updated Apr 26, 2018)
- Distributed learning with mpi4py
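The common idea behind the repositories above is data-parallel SGD: each worker runs plain SGD on its own shard of the data, and the workers' models are then combined, e.g. by parameter averaging. A minimal, MPI-free sketch of the "local" flavor is below; the 1-D least-squares problem, learning rate, and worker count are all illustrative choices, not taken from any of the listed repositories:

```python
import random

random.seed(0)

# Synthetic 1-D linear data: y = 3*x + small Gaussian noise.
xs = [random.uniform(-1, 1) for _ in range(400)]
data = [(x, 3.0 * x + random.gauss(0, 0.1)) for x in xs]

def local_sgd(shard, w=0.0, lr=0.1, epochs=5):
    """Run plain SGD on one worker's shard, minimizing (w*x - y)^2."""
    for _ in range(epochs):
        random.shuffle(shard)
        for x, y in shard:
            grad = 2 * (w * x - y) * x  # d/dw of the squared error
            w -= lr * grad
    return w

# Split the data across 4 simulated workers, train each one locally,
# then combine the results with one-shot parameter averaging.
n_workers = 4
shards = [data[i::n_workers] for i in range(n_workers)]
weights = [local_sgd(list(shard)) for shard in shards]
w_avg = sum(weights) / n_workers
print(w_avg)  # should land close to the true slope of 3.0
```

In a remote setting (as with the mpi4py item above), the averaging step would be replaced by a collective such as an allreduce across worker processes, but the per-worker loop stays the same.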