wenwei202 / terngrad
Ternary Gradients to Reduce Communication in Distributed Deep Learning (TensorFlow)
☆182 · Updated 6 years ago
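TernGrad reduces communication by stochastically quantizing each gradient component to three levels {-s, 0, +s} before it is sent, where s is the largest absolute value in the tensor and each component keeps its sign with probability |g_i| / s. This is a rough NumPy illustration of that ternarization step, not code from the repository; the function name `ternarize` is chosen here for clarity.

```python
import numpy as np

def ternarize(grad, rng=None):
    """Stochastically quantize a gradient tensor to {-s, 0, +s}.

    s is the largest absolute gradient value; each component keeps
    its sign with probability |g_i| / s and is zeroed otherwise,
    so the quantized gradient is an unbiased estimate of grad.
    """
    if rng is None:
        rng = np.random.default_rng()
    s = np.max(np.abs(grad))
    if s == 0.0:
        return np.zeros_like(grad)
    keep = rng.random(grad.shape) < np.abs(grad) / s
    return s * np.sign(grad) * keep

g = np.array([0.30, -0.10, 0.05, -0.30])
q = ternarize(g)
# Each entry of q is one of {-0.3, 0.0, +0.3}, so a worker only needs
# to send the scalar s plus 2 bits per component instead of a float32.
```

Because the quantization is unbiased (E[ternarize(g)] = g), averaging the ternarized gradients from many workers still converges in expectation toward the true averaged gradient.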
Alternatives and similar repositories for terngrad:
Users interested in terngrad are comparing it to the libraries listed below.
- Implements distributed machine learning with PyTorch + OpenMPI ☆51 · Updated 6 years ago
- Caffe implementation for dynamic network surgery ☆186 · Updated 7 years ago
- Training Deep Neural Networks with binary weights during propagations ☆378 · Updated 9 years ago
- Atomo: Communication-efficient Learning via Atomic Sparsification ☆25 · Updated 6 years ago
- QSGD-TF ☆21 · Updated 5 years ago
- PyTorch parameter server with MPI ☆16 · Updated 6 years ago
- Caffe for Sparse and Low-rank Deep Neural Networks ☆378 · Updated 5 years ago
- Implements quantized distillation. Code for the paper "Model compression via distillation and quantization" ☆331 · Updated 7 months ago
- Stochastic Gradient Push for Distributed Deep Learning ☆160 · Updated last year
- Code example for the ICLR 2018 oral paper ☆151 · Updated 6 years ago
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning ☆36 · Updated 5 years ago
- Implementation of "Iterative pruning" on TensorFlow ☆160 · Updated 3 years ago
- ☆402 · Updated 6 years ago
- ☆74 · Updated 5 years ago
- Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1 ☆299 · Updated 3 years ago
- Implementation of Trained Ternary Networks ☆108 · Updated 8 years ago
- GRACE: GRAdient ComprEssion for distributed deep learning ☆138 · Updated 7 months ago
- Implementation of model compression with the knowledge distillation method ☆343 · Updated 8 years ago
- Implementation of Ternary Weight Networks in Caffe ☆63 · Updated 8 years ago
- Implementation of a parameter server using the PyTorch communication lib ☆43 · Updated 5 years ago
- ☆125 · Updated last year
- Prunes DNNs using the Alternating Direction Method of Multipliers (ADMM) ☆108 · Updated 4 years ago
- ☆213 · Updated 6 years ago
- Sparse Recurrent Neural Networks -- Pruning Connections and Hidden Sizes (TensorFlow) ☆74 · Updated 4 years ago
- Path-Level Network Transformation for Efficient Architecture Search, in ICML 2018 ☆112 · Updated 6 years ago
- Quantize weights and activations in Recurrent Neural Networks ☆94 · Updated 6 years ago
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 ☆59 · Updated 6 years ago
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training ☆217 · Updated 8 months ago
- Training Low-bit DNNs with Stochastic Quantization ☆73 · Updated 7 years ago
- GPU-specialized parameter server for GPU machine learning ☆100 · Updated 6 years ago