adri-romsor / FitNets
FitNets: Hints for Thin Deep Nets
☆204 · Updated 9 years ago
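For context, FitNets trains a thin student network in two stages; the first minimizes an L2 "hint" loss between a teacher's intermediate representation and a regressed student representation. A minimal NumPy sketch of that stage-1 objective (the shapes, variable names, and the linear regressor are illustrative assumptions, not code from this repository):

```python
import numpy as np

rng = np.random.default_rng(0)
teacher_hint = rng.standard_normal((8, 64))    # teacher's intermediate "hint" features (batch, width)
student_guided = rng.standard_normal((8, 32))  # thinner student's "guided" layer features

# Regressor maps the student's narrower width up to the teacher's width
W = rng.standard_normal((32, 64)) * 0.1

# Stage-1 hint loss: L2 distance between regressed student features and teacher hints;
# stage 2 would then fine-tune the whole student with standard knowledge distillation
hint_loss = 0.5 * np.mean((student_guided @ W - teacher_hint) ** 2)
```

In the paper the regressor is a convolution rather than a plain linear map, but the objective has the same form.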
Related projects
Alternatives and complementary repositories for FitNets
- Implements quantized distillation. Code for our paper "Model compression via distillation and quantization" ☆329 · Updated 3 months ago
- TensorFlow implementation of Deep Mutual Learning ☆319 · Updated 6 years ago
- Using Teacher Assistants to Improve Knowledge Distillation: https://arxiv.org/pdf/1902.03393.pdf ☆258 · Updated 5 years ago
- Implementation of model compression with the knowledge distillation method ☆346 · Updated 7 years ago
- Caffe for Sparse and Low-rank Deep Neural Networks ☆378 · Updated 4 years ago
- MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning, ICCV 2019 ☆351 · Updated 4 years ago
- Official PyTorch implementation of Relational Knowledge Distillation, CVPR 2019 ☆394 · Updated 3 years ago
- Learning both Weights and Connections for Efficient Neural Networks: https://arxiv.org/abs/1506.02626 ☆174 · Updated 2 years ago
- Code for SkipNet: Learning Dynamic Routing in Convolutional Networks (ECCV 2018) ☆234 · Updated 5 years ago
- 3.41% and 17.11% error on CIFAR-10 and CIFAR-100 ☆328 · Updated 5 years ago
- NAS-Bench-201 API and instructions ☆625 · Updated 4 years ago
- Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks ☆376 · Updated 5 years ago
- Learning Efficient Convolutional Networks through Network Slimming, ICCV 2017 ☆560 · Updated 5 years ago
- A list of awesome papers on deep model compression and acceleration ☆349 · Updated 3 years ago
- Knowledge distillation methods implemented in TensorFlow (currently 11 (+1) methods, with more to be added) ☆266 · Updated 5 years ago
- Code for Layer-wise Optimal Brain Surgeon ☆75 · Updated 5 years ago
- PyTorch implementation of Convolutional Networks with Adaptive Inference Graphs ☆185 · Updated 6 years ago
- Caffe model of the ICCV 2017 paper "ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression": https://arxiv.org/abs/1707.06342 ☆146 · Updated 6 years ago
- ConvNet training using PyTorch ☆347 · Updated 3 years ago
- Multi-Scale Dense Networks for Resource Efficient Image Classification (ICLR 2018 Oral) ☆462 · Updated 5 years ago
- PyTorch implementation of "Distilling the Knowledge in a Neural Network" for model compression ☆59 · Updated 7 years ago
- BlockDrop: Dynamic Inference Paths in Residual Networks ☆140 · Updated last year
- Label Refinery: Improving ImageNet Classification through Label Progression ☆280 · Updated 6 years ago
- PyTorch implementation of SNAS ☆75 · Updated 5 years ago
- ☆213 · Updated 5 years ago
- Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours ☆396 · Updated 3 years ago
- Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons (AAAI 2019) ☆104 · Updated 5 years ago
- Path-Level Network Transformation for Efficient Architecture Search, ICML 2018 ☆113 · Updated 6 years ago
- Implementation of Data-free Knowledge Distillation for Deep Neural Networks (on arXiv) ☆79 · Updated 6 years ago
- Low-rank convolutional neural networks ☆96 · Updated 8 years ago