antspy / quantized_distillation
Implements quantized distillation. Code for our paper "Model compression via distillation and quantization"
☆332 · Updated 8 months ago
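The repository implements quantized distillation as described in the paper. As a rough illustration of the idea only (not the repository's actual code or API), the sketch below trains a small student against a full-precision teacher with a softened-logit distillation loss while the student's weights are uniformly quantized in the forward pass, with a straight-through estimator so gradients still update the full-precision weights. All names and hyperparameters here (`uniform_quantize`, `QuantLinear`, temperature `T`, `alpha`, the bit width) are illustrative assumptions.

```python
# Minimal sketch of quantized distillation (assumed setup, not the repo's API).
import torch
import torch.nn as nn
import torch.nn.functional as F

def uniform_quantize(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Uniformly quantize a tensor to 2**bits levels over its own range,
    using a straight-through estimator so gradients reach the full-precision weights."""
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / (2 ** bits - 1) + 1e-12
    q = torch.round((w - lo) / scale) * scale + lo
    return w + (q - w).detach()  # forward: quantized values; backward: identity

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    """Soft-target KL divergence at temperature T mixed with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard

class QuantLinear(nn.Linear):
    """Linear layer whose weights are quantized on the fly in the forward pass."""
    def __init__(self, in_features, out_features, bits=4):
        super().__init__(in_features, out_features)
        self.bits = bits

    def forward(self, x):
        return F.linear(x, uniform_quantize(self.weight, self.bits), self.bias)

# Toy training step with hypothetical shapes: distill a quantized student from a teacher.
teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
student = nn.Sequential(QuantLinear(32, 16), nn.ReLU(), QuantLinear(16, 10))
opt = torch.optim.SGD(student.parameters(), lr=0.1)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
with torch.no_grad():
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, y)
opt.zero_grad()
loss.backward()
opt.step()
```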
Alternatives and similar repositories for quantized_distillation:
Users interested in quantized_distillation are comparing it to the libraries listed below.
- ☆213 · Updated 6 years ago
- LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks ☆242 · Updated 2 years ago
- PyTorch Implementation of Weights Pruning ☆185 · Updated 7 years ago
- MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning. In ICCV 2019. ☆354 · Updated 4 years ago
- Learning both Weights and Connections for Efficient Neural Networks https://arxiv.org/abs/1506.02626 ☆177 · Updated 2 years ago
- Code example for the ICLR 2018 oral paper ☆151 · Updated 6 years ago
- A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights" ☆167 · Updated 5 years ago
- alibabacloud-quantization-networks ☆122 · Updated 5 years ago
- A list of awesome papers on deep model compression and acceleration ☆351 · Updated 3 years ago
- Caffe Implementation for Incremental network quantization ☆192 · Updated 6 years ago
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices ☆166 · Updated 4 years ago
- Papers for deep neural network compression and acceleration ☆397 · Updated 3 years ago
- Caffe implementation for dynamic network surgery. ☆186 · Updated 7 years ago
- Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks ☆380 · Updated 5 years ago
- Codes for Layer-wise Optimal Brain Surgeon ☆77 · Updated 6 years ago
- A PyTorch implementation of Neural Network Compression (pruning, deep compression, channel pruning) ☆154 · Updated 9 months ago
- Code for the NeurIPS'19 paper "Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks" ☆196 · Updated 5 years ago
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices ☆440 · Updated last year
- Pruning Neural Networks with Taylor criterion in PyTorch ☆317 · Updated 5 years ago
- Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours ☆395 · Updated 4 years ago
- [CVPR'20] ZeroQ: A Novel Zero Shot Quantization Framework ☆277 · Updated last year
- Reducing the size of convolutional neural networks ☆113 · Updated 7 years ago
- Network acceleration methods ☆178 · Updated 3 years ago
- Mayo: Auto-generation of hardware-friendly deep neural networks. Dynamic Channel Pruning: Feature Boosting and Suppression. ☆114 · Updated 5 years ago
- Prune DNN using Alternating Direction Method of Multipliers (ADMM) ☆108 · Updated 4 years ago
- FitNets: Hints for Thin Deep Nets ☆206 · Updated 9 years ago
- Quantization of Convolutional Neural networks. ☆244 · Updated 8 months ago
- Caffe for Sparse and Low-rank Deep Neural Networks ☆378 · Updated 5 years ago
- PyTorch Implementation of [1611.06440] Pruning Convolutional Neural Networks for Resource Efficient Inference ☆878 · Updated 5 years ago
- ConvNet training using pytorch ☆345 · Updated 4 years ago