adri-romsor / FitNets
FitNets: Hints for Thin Deep Nets
☆205 · Updated 9 years ago
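FitNets trains a thin, deep student network in two stages: first, an intermediate student layer (the "guided" layer) is regressed onto an intermediate teacher layer (the "hint") through a small regressor, and then the whole student is trained with standard knowledge distillation. Below is a minimal sketch of the stage-1 hint loss in PyTorch, with hypothetical module and variable names; it is an illustration of the idea, not the code from this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HintRegressor(nn.Module):
    """Maps the student's guided feature map to the teacher hint's channel count."""
    def __init__(self, student_channels: int, teacher_channels: int):
        super().__init__()
        # A 1x1 convolution is one simple choice when spatial sizes already match.
        self.proj = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_feat: torch.Tensor) -> torch.Tensor:
        return self.proj(student_feat)

def hint_loss(student_feat: torch.Tensor,
              teacher_feat: torch.Tensor,
              regressor: HintRegressor) -> torch.Tensor:
    """Stage-1 FitNets objective: L2 regression onto the (frozen) teacher hint."""
    return F.mse_loss(regressor(student_feat), teacher_feat.detach())

# Usage sketch with random tensors standing in for intermediate activations.
if __name__ == "__main__":
    regressor = HintRegressor(student_channels=32, teacher_channels=64)
    s_feat = torch.randn(8, 32, 16, 16)   # student guided-layer output
    t_feat = torch.randn(8, 64, 16, 16)   # teacher hint-layer output
    loss = hint_loss(s_feat, t_feat, regressor)
    loss.backward()
    print(loss.item())
```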
Alternatives and similar repositories for FitNets:
Users interested in FitNets are comparing it to the repositories listed below.
- Using Teacher Assistants to Improve Knowledge Distillation: https://arxiv.org/pdf/1902.03393.pdf (☆257, updated 5 years ago)
- Implementation of model compression with the knowledge distillation method (☆343, updated 8 years ago)
- Implements quantized distillation. Code for our paper "Model compression via distillation and quantization" (☆332, updated 7 months ago)
- TensorFlow implementation of Deep Mutual Learning (☆322, updated 6 years ago)
- MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning. In ICCV 2019. (☆354, updated 4 years ago)
- Knowledge distillation methods implemented with TensorFlow (currently 11 (+1) methods, with more to be added) (☆264, updated 5 years ago)
- Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks (☆379, updated 5 years ago)
- NAS-Bench-201 API and instructions (☆627, updated 4 years ago)
- Code for SkipNet: Learning Dynamic Routing in Convolutional Networks (ECCV 2018) (☆239, updated 5 years ago)
- PyTorch implementation of "Distilling the Knowledge in a Neural Network" for model compression (☆59, updated 7 years ago)
- Label Refinery: Improving ImageNet Classification through Label Progression (☆279, updated 6 years ago)
- Official PyTorch implementation of Relational Knowledge Distillation, CVPR 2019 (☆395, updated 3 years ago)
- BlockDrop: Dynamic Inference Paths in Residual Networks (☆142, updated 2 years ago)
- Base and channel-pruned ResNet18 models (☆145, updated 2 years ago)
- Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours (☆396, updated 4 years ago)
- Neural architecture search (NAS) (☆14, updated 5 years ago)
- Caffe for Sparse and Low-rank Deep Neural Networks (☆378, updated 5 years ago)
- Caffe implementation for dynamic network surgery (☆186, updated 7 years ago)
- Caffe model of the ICCV'17 paper "ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression" https://arxiv.org/abs/1707.06342 (☆146, updated 6 years ago)
- 3.41% and 17.11% error on CIFAR-10 and CIFAR-100 (☆331, updated 6 years ago)
- Learning Efficient Convolutional Networks through Network Slimming. In ICCV 2017. (☆567, updated 5 years ago)
- ☆213, updated 6 years ago
- Codes for our paper "Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation" (☆362, updated 5 years ago)
- Learning both Weights and Connections for Efficient Neural Networks https://arxiv.org/abs/1506.02626 (☆177, updated 2 years ago)
- Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons (AAAI 2019) (☆104, updated 5 years ago)
- Multi-Scale Dense Networks for Resource Efficient Image Classification (ICLR 2018 Oral) (☆461, updated 5 years ago)
- ☆134, updated 6 years ago
- PyTorch implementation of Convolutional Networks with Adaptive Inference Graphs (☆185, updated 6 years ago)
- Tools for computing model parameters and FLOPs (☆86, updated 6 years ago)
- Low-rank convolutional neural networks (☆96, updated 8 years ago)