wenwei202 / terngrad
Ternary Gradients to Reduce Communication in Distributed Deep Learning (TensorFlow)
☆182 · Updated 6 years ago
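The core idea behind TernGrad is to quantize each gradient component to one of three levels before communication, using stochastic rounding so the quantized gradient is unbiased. The sketch below illustrates that scheme in NumPy; the function name `ternarize` is hypothetical and the actual repository is implemented in TensorFlow, so this is an illustration of the technique rather than the repo's code.

```python
import numpy as np

def ternarize(grad, rng):
    # TernGrad-style stochastic ternarization (illustrative sketch):
    # each component g_i becomes s * sign(g_i) with probability |g_i| / s,
    # and 0 otherwise, where s = max(|grad|).
    # This keeps the quantized gradient unbiased: E[ternary] == grad.
    s = np.max(np.abs(grad))
    if s == 0:
        return np.zeros_like(grad)
    keep = rng.random(grad.shape) < (np.abs(grad) / s)
    return s * np.sign(grad) * keep
```

Since every component is one of {-s, 0, +s}, each worker only needs to send the scalar s plus two bits per component, which is where the communication saving comes from.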
Alternatives and similar repositories for terngrad
Users interested in terngrad are comparing it to the repositories listed below.
- Implement distributed machine learning with PyTorch + OpenMPI ☆51 · Updated 6 years ago
- Caffe implementation for dynamic network surgery ☆187 · Updated 7 years ago
- QSGD-TF ☆21 · Updated 6 years ago
- PyTorch parameter server with MPI ☆16 · Updated 7 years ago
- Atomo: Communication-efficient Learning via Atomic Sparsification ☆27 · Updated 6 years ago
- Implementation of "Iterative pruning" on TensorFlow ☆160 · Updated 4 years ago
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning ☆36 · Updated 5 years ago
- Stochastic Gradient Push for Distributed Deep Learning ☆168 · Updated 2 years ago
- Path-Level Network Transformation for Efficient Architecture Search, in ICML 2018 ☆112 · Updated 7 years ago
- Implements quantized distillation. Code for our paper "Model compression via distillation and quantization" ☆335 · Updated 11 months ago
- Prune DNN using Alternating Direction Method of Multipliers (ADMM) ☆108 · Updated 4 years ago
- Caffe for Sparse and Low-rank Deep Neural Networks ☆379 · Updated 5 years ago
- ☆75 · Updated 6 years ago
- Implementation for Trained Ternary Network ☆107 · Updated 8 years ago
- Quantize weights and activations in Recurrent Neural Networks ☆94 · Updated 6 years ago
- Code example for the ICLR 2018 oral paper ☆152 · Updated 7 years ago
- GPU-specialized parameter server for GPU machine learning ☆101 · Updated 7 years ago
- ☆47 · Updated 5 years ago
- Implementation of model compression with knowledge distilling method ☆343 · Updated 8 years ago
- Implementation of Parameter Server using PyTorch communication lib ☆42 · Updated 6 years ago
- ☆130 · Updated last year
- Low-rank convolutional neural networks ☆97 · Updated 9 years ago
- Training Low-bits DNNs with Stochastic Quantization ☆74 · Updated 7 years ago
- Training Deep Neural Networks with binary weights during propagations ☆381 · Updated 9 years ago
- Mayo: Auto-generation of hardware-friendly deep neural networks. Dynamic Channel Pruning: Feature Boosting and Suppression ☆115 · Updated 5 years ago
- ☆213 · Updated 6 years ago
- Implementation of Ternary Weight Networks in Caffe ☆63 · Updated 8 years ago
- Caffe for Sparse Convolutional Neural Network ☆237 · Updated 2 years ago
- Training deep neural networks with low precision multiplications ☆63 · Updated 9 years ago
- Benchmarking State-of-the-Art Deep Learning Software Tools ☆169 · Updated 7 years ago