DushyantaDhyani / kdtf
Knowledge Distillation using Tensorflow
☆142Updated 5 years ago
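The common thread among kdtf and the repositories below is Hinton-style knowledge distillation: a student network is trained against the teacher's temperature-softened output distribution in addition to the hard labels. A minimal NumPy sketch of that loss (function names, the temperature of 4.0, and the 50/50 mixing weight are illustrative choices, not taken from the kdtf codebase):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """alpha * soft (teacher-matching) term + (1 - alpha) * hard-label term."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL(teacher || student) on softened distributions; the T^2 factor keeps
    # soft-term gradient magnitudes comparable across temperatures.
    soft = np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                               - np.log(p_student + 1e-12)), axis=-1)
    soft = (temperature ** 2) * soft.mean()
    # Standard cross-entropy against the ground-truth labels (temperature 1).
    p_hard = softmax(student_logits)
    hard = -np.log(p_hard[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * soft + (1.0 - alpha) * hard
```

When the student's logits match the teacher's exactly, the soft term vanishes and only the hard cross-entropy remains; as the student drifts from the teacher, the loss grows.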
Alternatives and similar repositories for kdtf
Users interested in kdtf are comparing it to the libraries listed below
Sorting:
- Knowledge distillation methods implemented with Tensorflow (currently 11 (+1) methods, with more to be added)☆264Updated 5 years ago
- Implementation of model compression with a knowledge distillation method.☆343Updated 8 years ago
- A machine learning experiment☆180Updated 7 years ago
- Teaches a student network from the knowledge obtained via training of a larger teacher network☆158Updated 7 years ago
- Network acceleration methods☆177Updated 3 years ago
- Implements quantized distillation. Code for our paper "Model compression via distillation and quantization"☆333Updated 9 months ago
- Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons (AAAI 2019)☆104Updated 5 years ago
- A list of awesome papers on deep model compression and acceleration☆351Updated 3 years ago
- Using Teacher Assistants to Improve Knowledge Distillation: https://arxiv.org/pdf/1902.03393.pdf☆256Updated 5 years ago
- Keras + tensorflow experiments with knowledge distillation on EMNIST dataset☆34Updated 7 years ago
- The codes for recent knowledge distillation algorithms and benchmark results via TF2.0 low-level API☆109Updated 3 years ago
- FitNets: Hints for Thin Deep Nets☆207Updated 10 years ago
- PyTorch implementation of "Distilling the Knowledge in a Neural Network" for model compression☆58Updated 7 years ago
- Corrupted labels and label smoothing☆129Updated 7 years ago
- Tensorflow code for Differentiable architecture search☆72Updated 6 years ago
- Papers for deep neural network compression and acceleration☆397Updated 3 years ago
- Code for the NeurIPS'19 paper "Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks"☆196Updated 5 years ago
- A universal and efficient framework for training well-performing light net☆124Updated 7 years ago
- A demonstration of how to compress a neural network using pruning in tensorflow.☆78Updated 7 years ago
- Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours☆395Updated 4 years ago
- ☆169Updated 4 years ago
- Random miniprojects with pytorch.☆171Updated 6 years ago
- Neural architecture search(NAS)☆14Updated 6 years ago
- Implementation of Data-free Knowledge Distillation for Deep Neural Networks (on arxiv!)☆81Updated 7 years ago
- Label Refinery: Improving ImageNet Classification through Label Progression☆278Updated 6 years ago
- Converting A PyTorch Model to Tensorflow pb using ONNX☆160Updated 5 years ago
- Bridging the gap Between Stability and Scalability in Neural Architecture Search☆141Updated 3 years ago
- A large scale study of Knowledge Distillation.☆220Updated 5 years ago
- Focal Loss of multi-classification in tensorflow☆80Updated 6 years ago
- my simple tutorial for mxnet, a fast deep learning framework☆103Updated 6 years ago