antspy / quantized_distillation
Implements quantized distillation. Code for our paper "Model compression via distillation and quantization"
☆332 · Updated 7 months ago
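The repository hosts the paper's implementation; as a rough illustration of the quantized-distillation idea (train a low-precision student against a full-precision teacher, applying gradients computed on quantized weights to the full-precision copy), here is a minimal PyTorch sketch. The `teacher`, `student`, batch, and uniform quantizer below are illustrative assumptions, not this repository's API.

```python
# Sketch only: not antspy/quantized_distillation's API; names and
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

def uniform_quantize(w, num_bits=4):
    """Uniformly quantize a tensor to 2**num_bits levels over its value range."""
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min).clamp(min=1e-8) / (2 ** num_bits - 1)
    return torch.round((w - w_min) / scale) * scale + w_min

def quantized_distillation_step(teacher, student, optimizer, x, y,
                                T=4.0, alpha=0.7, num_bits=4):
    """One training step: forward/backward with quantized student weights,
    update applied to the full-precision weights (straight-through style)."""
    # Save the full-precision weights, then quantize in place for this step.
    full_precision = {k: v.clone() for k, v in student.state_dict().items()}
    with torch.no_grad():
        for p in student.parameters():
            p.copy_(uniform_quantize(p, num_bits))

    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)

    # Distillation loss: softened teacher targets mixed with the true labels.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, y)
    loss = alpha * soft + (1.0 - alpha) * hard

    optimizer.zero_grad()
    loss.backward()

    # Restore full-precision weights; the gradients (computed on the quantized
    # weights) are then applied to them by the optimizer.
    with torch.no_grad():
        for name, p in student.named_parameters():
            p.copy_(full_precision[name])
    optimizer.step()
    return loss.item()
```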
Alternatives and similar repositories for quantized_distillation:
Users interested in quantized_distillation are comparing it to the libraries listed below.
- PyTorch Implementation of Weights Pruning ☆185 · Updated 7 years ago
- A list of awesome papers on deep model compression and acceleration ☆351 · Updated 3 years ago
- Papers for deep neural network compression and acceleration ☆396 · Updated 3 years ago
- Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours ☆396 · Updated 4 years ago
- LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks ☆242 · Updated 2 years ago
- Pruning Neural Networks with Taylor criterion in PyTorch ☆315 · Updated 5 years ago
- Learning both Weights and Connections for Efficient Neural Networks https://arxiv.org/abs/1506.02626 ☆177 · Updated 2 years ago
- MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning. In ICCV 2019. ☆354 · Updated 4 years ago
- Code for the NeurIPS'19 paper "Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks" ☆196 · Updated 5 years ago
- A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights" ☆167 · Updated 5 years ago
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices ☆440 · Updated last year
- Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks ☆379 · Updated 5 years ago
- Neural architecture search (NAS) ☆14 · Updated 5 years ago
- Code example for the ICLR 2018 oral paper ☆151 · Updated 6 years ago
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices ☆165 · Updated 4 years ago
- Mayo: Auto-generation of hardware-friendly deep neural networks. Dynamic Channel Pruning: Feature Boosting and Suppression. ☆114 · Updated 5 years ago
- alibabacloud-quantization-networks ☆121 · Updated 5 years ago
- Knowledge distillation methods implemented with TensorFlow (currently 11 (+1) methods, with more to be added) ☆264 · Updated 5 years ago
- Codes for Layer-wise Optimal Brain Surgeon ☆77 · Updated 6 years ago
- FitNets: Hints for Thin Deep Nets ☆205 · Updated 9 years ago
- Code for SkipNet: Learning Dynamic Routing in Convolutional Networks (ECCV 2018) ☆239 · Updated 5 years ago
- [CVPR 2020] APQ: Joint Search for Network Architecture, Pruning and Quantization Policy ☆157 · Updated 4 years ago
- Implementation of model compression with the knowledge distillation method. ☆343 · Updated 8 years ago
- Prune DNNs using the Alternating Direction Method of Multipliers (ADMM) ☆108 · Updated 4 years ago
- Network acceleration methods ☆178 · Updated 3 years ago
- Caffe implementation for dynamic network surgery. ☆186 · Updated 7 years ago
- CNN channel pruning, LeGR, MorphNet, AMC. Codebase for paper "LeGR: Filter Pruning via Learned Global Ranking" ☆114 · Updated 4 years ago
- Code for “Discrimination-aware Channel Pruning for Deep Neural Networks” ☆184 · Updated 4 years ago
- [CVPR 2020] PyTorch implementation of our accepted CVPR 2020 paper: forward and backward information retention for a… ☆179 · Updated 5 years ago