DushyantaDhyani / kdtf
Knowledge Distillation using Tensorflow
☆142 · Updated 5 years ago
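kdtf implements Hinton-style knowledge distillation, in which a small student network is trained to match the temperature-softened output distribution of a larger teacher. A minimal NumPy sketch of the soft-target loss (the temperature `T` and the example logits are illustrative assumptions, not values taken from the repo):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T yields a softer distribution.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Cross-entropy between the teacher's softened distribution and the
    # student's softened distribution, scaled by T^2 to keep gradient
    # magnitudes comparable across temperatures (Hinton et al.,
    # "Distilling the Knowledge in a Neural Network").
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    return -(T ** 2) * np.mean(np.sum(p_teacher * log_p_student, axis=-1))

# Illustrative logits for a 3-class problem (hypothetical values).
teacher = np.array([[5.0, 1.0, -2.0]])
student = np.array([[4.0, 0.5, -1.0]])
print(distillation_loss(student, teacher, T=4.0))
```

In practice this soft-target term is combined with the usual hard-label cross-entropy via a weighting coefficient; the repositories below vary mainly in what intermediate signals (logits, hidden activations, attention maps) they distill.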
Alternatives and similar repositories for kdtf:
Users interested in kdtf are comparing it to the repositories listed below.
- Knowledge distillation methods implemented with TensorFlow (currently 11 (+1) methods; more will be added) ☆264 · Updated 5 years ago
- Implementation of model compression with the knowledge distillation method ☆343 · Updated 8 years ago
- A machine learning experiment ☆180 · Updated 7 years ago
- Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons (AAAI 2019) ☆104 · Updated 5 years ago
- Code for recent knowledge distillation algorithms and benchmark results via the TF2.0 low-level API ☆109 · Updated 2 years ago
- Teaches a student network with the knowledge obtained by training a larger teacher network ☆157 · Updated 6 years ago
- Network acceleration methods ☆178 · Updated 3 years ago
- Using Teacher Assistants to Improve Knowledge Distillation: https://arxiv.org/pdf/1902.03393.pdf ☆257 · Updated 5 years ago
- Implements quantized distillation. Code for the paper "Model compression via distillation and quantization" ☆332 · Updated 7 months ago
- A list of awesome papers on deep model compression and acceleration ☆351 · Updated 3 years ago
- Label Refinery: Improving ImageNet Classification through Label Progression ☆279 · Updated 6 years ago
- A large-scale study of knowledge distillation ☆219 · Updated 4 years ago
- PyTorch implementation of "Distilling the Knowledge in a Neural Network" for model compression ☆59 · Updated 7 years ago
- FitNets: Hints for Thin Deep Nets ☆205 · Updated 9 years ago
- An implementation of MNIST center-loss training and visualization ☆75 · Updated 7 years ago
- Corrupted labels and label smoothing ☆128 · Updated 7 years ago
- Random mini-projects with PyTorch ☆171 · Updated 6 years ago
- Code for the NeurIPS'19 paper "Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks" ☆196 · Updated 5 years ago
- ☆31 · Updated 7 years ago
- Keras + TensorFlow experiments with knowledge distillation on the EMNIST dataset ☆34 · Updated 7 years ago
- A universal and efficient framework for training well-performing lightweight networks ☆124 · Updated 7 years ago
- PyTorch implementation of weight pruning ☆185 · Updated 7 years ago
- Focal loss for multi-class classification in TensorFlow ☆80 · Updated 6 years ago
- Implementation of and experiments with AdamW in PyTorch ☆93 · Updated 5 years ago
- ☆169 · Updated 4 years ago
- Implementation of "Data-Free Knowledge Distillation for Deep Neural Networks" (on arXiv) ☆81 · Updated 7 years ago
- TensorFlow implementation of Deep Mutual Learning ☆322 · Updated 6 years ago
- Large-Margin Softmax Loss, Angular Softmax Loss, Additive Margin Softmax, ArcFace loss, and focal loss in TensorFlow ☆106 · Updated 5 years ago
- TensorFlow code for differentiable architecture search ☆72 · Updated 6 years ago
- PyTorch implementation of the Large-Margin Softmax (L-Softmax) loss ☆100 · Updated 7 years ago
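Several of the listed repositories implement focal loss, which down-weights the contribution of well-classified examples via the factor (1 − p_t)^γ so training focuses on hard examples. A minimal NumPy sketch (the function name, example probabilities, and γ=2 are illustrative assumptions, not code from any listed repo):

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0):
    # probs: (N, C) predicted class probabilities; labels: (N,) integer classes.
    # FL(p_t) = -(1 - p_t)^gamma * log(p_t), as in Lin et al.,
    # "Focal Loss for Dense Object Detection"; gamma=0 recovers
    # plain cross-entropy.
    p_t = probs[np.arange(len(labels)), labels]  # probability of the true class
    return np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t))

# Illustrative batch of two 3-class predictions (hypothetical values).
probs = np.array([[0.9, 0.05, 0.05],
                  [0.2, 0.7, 0.1]])
labels = np.array([0, 1])
print(focal_loss(probs, labels))
```

Because the modulating factor (1 − p_t)^γ is below 1 for any confident prediction, the focal loss of a batch is always at most its ordinary cross-entropy.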