hwang595 / ps_pytorch
Implement distributed machine learning with PyTorch + OpenMPI
☆51 · Updated 5 years ago
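The repository's own code is more involved; purely as orientation, here is a minimal sketch of the parameter-server pattern the description refers to, using torch.distributed with the MPI backend. The file name ps_sketch.py, the toy weight vector, and the synthetic batches are illustrative assumptions, not the repo's actual interface.

```python
# ps_sketch.py -- minimal parameter-server sketch (NOT the repo's code):
# rank 0 is the server, all other ranks are workers.
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="mpi")   # requires a PyTorch build with MPI
    rank, world = dist.get_rank(), dist.get_world_size()

    torch.manual_seed(0)                      # identical init on every rank
    w = torch.randn(10, requires_grad=True)   # toy "model": one weight vector
    torch.manual_seed(rank)                   # ...but different data per worker
    lr = 0.1

    for _ in range(20):
        if rank == 0:
            # server: collect one gradient per worker, average, take an SGD step
            total = torch.zeros(10)
            buf = torch.zeros(10)
            for src in range(1, world):
                dist.recv(buf, src=src)
                total += buf
            with torch.no_grad():
                w -= lr * total / (world - 1)
        else:
            # worker: forward/backward on a synthetic batch, ship the gradient
            x = torch.randn(10)
            loss = (w * x).sum() ** 2
            loss.backward()
            dist.send(w.grad, dst=0)
            w.grad = None                     # reset accumulation for next step
        # server pushes the updated weights to everyone
        dist.broadcast(w.data, src=0)

if __name__ == "__main__":
    main()
```

Launched with e.g. `mpirun -n 3 python ps_sketch.py`, this gives two workers pushing gradients to one server, which is the communication shape ps_pytorch builds on.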
Related projects
Alternatives and complementary repositories for ps_pytorch
- PyTorch parameter server with MPI ☆16 · Updated 6 years ago
- ☆74 · Updated 5 years ago
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 ☆56 · Updated 6 years ago (the core idea is sketched after this list)
- Stochastic Gradient Push for Distributed Deep Learning ☆158 · Updated last year
- Ternary Gradients to Reduce Communication in Distributed Deep Learning (TensorFlow) ☆182 · Updated 6 years ago
- Implementation of Parameter Server using PyTorch communication lib ☆43 · Updated 5 years ago
- QSGD-TF ☆21 · Updated 5 years ago
- Atomo: Communication-efficient Learning via Atomic Sparsification ☆25 · Updated 5 years ago
- CoLa - Decentralized Linear Learning: https://arxiv.org/abs/1808.04883 ☆19 · Updated 2 years ago
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning ☆35 · Updated 5 years ago
- Code for the signSGD paper ☆81 · Updated 3 years ago
- A compressed adaptive optimizer for training large-scale deep learning models using PyTorch ☆27 · Updated 4 years ago
- ☆43 · Updated 4 years ago
- Implementation of (overlap) local SGD in PyTorch ☆32 · Updated 4 years ago
- Algorithm: Decentralized Parallel Stochastic Gradient Descent ☆41 · Updated 6 years ago
- ☆12 · Updated 6 years ago
- Code for ICML 2017 paper, SplitNet: Learning to Semantically Split Deep Networks for Parameter Reduction and Model Parallelization ☆55 · Updated 7 years ago
- An analytical performance modeling tool for deep neural networks. ☆87 · Updated 4 years ago
- Sketched SGD ☆28 · Updated 4 years ago
- Implementing Google's DistBelief paper ☆108 · Updated last year
- ☆53 · Updated 6 years ago
- DRACO: Byzantine-resilient Distributed Training via Redundant Gradients ☆23 · Updated 5 years ago
- GPU-specialized parameter server for GPU machine learning. ☆100 · Updated 6 years ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training" ☆62 · Updated 5 years ago
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 ☆144 · Updated 3 weeks ago
- Dual-way gradient sparsification approach for async DNN training, based on PyTorch. ☆11 · Updated last year
- ddl-benchmarks: Benchmarks for Distributed Deep Learning ☆37 · Updated 4 years ago
- A PyTorch implementation of the paper "Decoupled Parallel Backpropagation with Convergence Guarantee" ☆30 · Updated 6 years ago
- papers on scalable and efficient machine learning systems ☆192 · Updated 6 years ago
- SmoothOut: Smoothing Out Sharp Minima to Improve Generalization in Deep Learning ☆23 · Updated 6 years ago
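Several entries above (Sparsified SGD with Memory, gTop-k S-SGD, the dual-way sparsification work) revolve around the same trick: communicate only the k largest-magnitude gradient entries and carry the dropped residual into the next step. A toy sketch of that idea, with all names chosen here for illustration:

```python
import torch

def topk_with_memory(grad: torch.Tensor, memory: torch.Tensor, k: int) -> torch.Tensor:
    """Keep only the k largest-magnitude entries of grad (plus the carried
    residual); everything dropped goes back into `memory` for next time."""
    acc = grad + memory                   # fold in the previous residual
    flat = acc.flatten()
    idx = flat.abs().topk(k).indices      # positions of the k largest magnitudes
    sparse = torch.zeros_like(flat)
    sparse[idx] = flat[idx]
    sparse = sparse.view_as(acc)
    memory.copy_(acc - sparse)            # stash what was dropped
    return sparse

# toy usage: compress a stream of gradients to ~10% density
mem = torch.zeros(8, 8)
for _ in range(5):
    g = torch.randn(8, 8)
    sparse_g = topk_with_memory(g, mem, k=6)   # 6 of 64 entries survive
```

In a distributed run, `sparse` (or just its indices and values) is what would be sent or aggregated; the residual in `memory` ensures dropped coordinates are delayed rather than lost.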