antspy / quantized_distillation
Implements quantized distillation. Code for our paper "Model compression via distillation and quantization"
☆332 · Updated 6 months ago
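The repository accompanies the paper "Model compression via distillation and quantization," in which a low-precision student network is trained against a full-precision teacher. As a rough illustration of that idea (this is a minimal PyTorch sketch, not the repository's actual API; all class and function names below are illustrative), the student's weights can be uniformly quantized on the forward pass with a straight-through estimator while the training loss blends soft teacher targets with the hard labels:

```python
# Minimal sketch of quantized distillation (illustrative, assumed interface):
# train a low-precision student against a full-precision teacher, quantizing
# the student's weights on the forward pass and letting gradients flow through
# the quantizer via the straight-through estimator.
import torch
import torch.nn as nn
import torch.nn.functional as F


def uniform_quantize(w, num_bits=4):
    """Uniformly quantize a tensor to 2**num_bits levels over its own range."""
    levels = 2 ** num_bits - 1
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min).clamp(min=1e-8) / levels
    q = torch.round((w - w_min) / scale) * scale + w_min
    # Straight-through estimator: forward uses q, backward treats it as identity.
    return w + (q - w).detach()


class QuantizedLinear(nn.Linear):
    """Linear layer whose weights are quantized only on the forward pass."""
    def forward(self, x):
        return F.linear(x, uniform_quantize(self.weight), self.bias)


def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    """Blend soft teacher targets (KL at temperature T) with the hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard
```

In a training loop the teacher's logits would typically be computed under `torch.no_grad()`, so only the quantized student receives gradient updates.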
Alternatives and similar repositories for quantized_distillation:
Users interested in quantized_distillation are comparing it to the libraries listed below.
- ☆213 · Updated 6 years ago
- Learning both Weights and Connections for Efficient Neural Networks https://arxiv.org/abs/1506.02626 ☆176 · Updated 2 years ago
- MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning. In ICCV 2019. ☆353 · Updated 4 years ago
- LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks ☆242 · Updated 2 years ago
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices ☆166 · Updated 3 years ago
- A list of awesome papers on deep model compression and acceleration ☆351 · Updated 3 years ago
- Code for the NeurIPS'19 paper "Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks" ☆196 · Updated 5 years ago
- PyTorch implementation of weight pruning ☆185 · Updated 7 years ago
- A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights" ☆167 · Updated 4 years ago
- Code example for the ICLR 2018 oral paper ☆151 · Updated 6 years ago
- Papers for deep neural network compression and acceleration ☆396 · Updated 3 years ago
- Mayo: Auto-generation of hardware-friendly deep neural networks. Dynamic Channel Pruning: Feature Boosting and Suppression. ☆114 · Updated 5 years ago
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices ☆437 · Updated last year
- A PyTorch implementation of neural network compression (pruning, deep compression, channel pruning) ☆155 · Updated 7 months ago
- Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours ☆396 · Updated 4 years ago
- Caffe implementation of Incremental Network Quantization ☆190 · Updated 6 years ago
- alibabacloud-quantization-networks ☆121 · Updated 5 years ago
- Quantization of convolutional neural networks ☆243 · Updated 6 months ago
- Caffe implementation of dynamic network surgery ☆186 · Updated 7 years ago
- [CVPR 2020] APQ: Joint Search for Network Architecture, Pruning and Quantization Policy ☆157 · Updated 4 years ago
- FitNets: Hints for Thin Deep Nets ☆204 · Updated 9 years ago
- Code for Layer-wise Optimal Brain Surgeon ☆77 · Updated 6 years ago
- Pruning neural networks with the Taylor criterion in PyTorch ☆315 · Updated 5 years ago
- Pruning DNNs using the Alternating Direction Method of Multipliers (ADMM) ☆108 · Updated 4 years ago
- Path-Level Network Transformation for Efficient Architecture Search, in ICML 2018 ☆112 · Updated 6 years ago
- Implementation of Trained Ternary Network ☆108 · Updated 8 years ago
- Implementation of model compression with the knowledge distillation method ☆343 · Updated 8 years ago
- Caffe model of the ICCV'17 paper "ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression" https://arxiv.org/abs/1707.06342 ☆146 · Updated 6 years ago
- Reducing the size of convolutional neural networks ☆114 · Updated 7 years ago
- Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks ☆378 · Updated 5 years ago