wenwei202 / terngrad
Ternary Gradients to Reduce Communication in Distributed Deep Learning (TensorFlow)
☆181 · Updated 5 years ago
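TernGrad compresses each gradient to three levels {-s, 0, +s} via stochastic rounding, so workers exchange roughly 2 bits per component instead of 32. Below is a minimal NumPy sketch of the core stochastic ternarization (the actual repo also applies layer-wise scaling and gradient clipping, which are omitted here); the function name `terngrad_quantize` is ours, not the repo's API:

```python
import numpy as np

def terngrad_quantize(grad, rng=None):
    """Stochastically ternarize a gradient to {-s, 0, +s} (TernGrad-style sketch).

    s is the max absolute value of the gradient; each component keeps its
    sign with probability |g_i| / s, so the quantized gradient is an
    unbiased estimator of the original: E[q_i] = s * sign(g_i) * |g_i|/s = g_i.
    """
    if rng is None:
        rng = np.random.default_rng()
    s = np.max(np.abs(grad))
    if s == 0:
        return np.zeros_like(grad)
    # Bernoulli mask: larger-magnitude components survive more often.
    keep = rng.random(grad.shape) < (np.abs(grad) / s)
    return s * np.sign(grad) * keep

g = np.array([0.1, -0.5, 0.25, 0.0])
q = terngrad_quantize(g)
# Every entry of q is one of {-0.5, 0.0, +0.5}.
```

Because the quantizer is unbiased, averaging ternarized gradients across many workers (or steps) recovers the true gradient in expectation, which is what makes the aggressive 3-level compression viable for SGD.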
Related projects
Alternatives and complementary repositories for terngrad
- Implements distributed machine learning with PyTorch + OpenMPI ☆51 · Updated 5 years ago
- Caffe implementation for dynamic network surgery ☆186 · Updated 7 years ago
- QSGD-TF ☆21 · Updated 5 years ago
- Implementation of "Iterative pruning" on TensorFlow ☆161 · Updated 3 years ago
- Caffe for Sparse and Low-rank Deep Neural Networks ☆378 · Updated 4 years ago
- PyTorch parameter server with MPI ☆16 · Updated 6 years ago
- Atomo: Communication-efficient Learning via Atomic Sparsification ☆25 · Updated 5 years ago
- Path-Level Network Transformation for Efficient Architecture Search, in ICML 2018 ☆113 · Updated 6 years ago
- Implementation of model compression with the knowledge distillation method ☆346 · Updated 7 years ago
- Efficient Architecture Search by Network Transformation, in AAAI 2018 ☆170 · Updated 5 years ago
- Stochastic Gradient Push for Distributed Deep Learning ☆157 · Updated last year
- Implements quantized distillation; code for the paper "Model compression via distillation and quantization" ☆329 · Updated 3 months ago
- Prune DNNs using the Alternating Direction Method of Multipliers (ADMM) ☆106 · Updated 4 years ago
- GPU-specialized parameter server for GPU machine learning ☆100 · Updated 6 years ago
- Training Deep Neural Networks with binary weights during propagations ☆378 · Updated 8 years ago
- Code example for the ICLR 2018 oral paper ☆149 · Updated 6 years ago
- Caffe for Sparse Convolutional Neural Network ☆238 · Updated last year
- Implementation of Parameter Server using PyTorch communication lib ☆43 · Updated 5 years ago
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning ☆35 · Updated 5 years ago
- PyTorch Implementation of Weights Pruning ☆184 · Updated 6 years ago
- Implementation for Trained Ternary Network ☆108 · Updated 7 years ago
- Sparse Recurrent Neural Networks: Pruning Connections and Hidden Sizes (TensorFlow) ☆73 · Updated 4 years ago
- Quantize weights and activations in Recurrent Neural Networks ☆94 · Updated 6 years ago
- Caffe implementation of Incremental Network Quantization ☆191 · Updated 6 years ago
- GRACE: GRAdient ComprEssion for distributed deep learning ☆138 · Updated 3 months ago
- Training deep neural networks with low-precision multiplications ☆63 · Updated 9 years ago
- Code for Layer-wise Optimal Brain Surgeon ☆75 · Updated 5 years ago