TropComplique / knowledge-distillation-keras
A machine learning experiment
☆180 · Updated 7 years ago
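The repository's topic is Hinton-style knowledge distillation in Keras. As a framework-agnostic illustration of the core loss (not code from this repository), here is a minimal NumPy sketch: a temperature-softened cross-entropy between teacher and student distributions, blended with the ordinary cross-entropy on hard labels. The function names and the default `temperature`/`alpha` values are illustrative assumptions.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax, computed stably."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """Hedged sketch of the distillation objective (values are illustrative).

    Soft term: cross-entropy between softened teacher and student
    distributions (equivalent to the KL term up to a constant), scaled by
    T^2 as in Hinton et al. to keep gradient magnitudes comparable.
    Hard term: ordinary cross-entropy against the integer class labels.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    soft = -np.sum(p_teacher * np.log(p_student + 1e-12), axis=-1).mean()
    soft *= temperature ** 2

    q_student = softmax(student_logits)  # temperature 1 for the hard term
    hard = -np.log(q_student[np.arange(len(labels)), labels] + 1e-12).mean()

    return alpha * soft + (1 - alpha) * hard
```

A student whose logits match the teacher's incurs a lower loss than one that concentrates mass on the wrong class, which is the signal distillation trains on.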
Alternatives and similar repositories for knowledge-distillation-keras:
Users that are interested in knowledge-distillation-keras are comparing it to the libraries listed below
- Knowledge Distillation using Tensorflow ☆142 · Updated 5 years ago
- Knowledge distillation methods implemented with Tensorflow (currently 11 (+1) methods; more will be added) ☆264 · Updated 5 years ago
- Implementation of model compression with the knowledge distillation method ☆343 · Updated 8 years ago
- An implementation of "mixup: Beyond Empirical Risk Minimization" ☆285 · Updated 7 years ago
- PyTorch Implementation of Weights Pruning ☆185 · Updated 7 years ago
- A list of awesome papers on deep model compression and acceleration ☆351 · Updated 3 years ago
- Code for the NeurIPS'19 paper "Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks" ☆196 · Updated 5 years ago
- ShuffleNet Implementation using Keras Functional Framework 2.0 ☆77 · Updated 4 years ago
- Pytorch version for weight pruning for Murata Group's CREST project ☆57 · Updated 6 years ago
- Random miniprojects with pytorch ☆171 · Updated 6 years ago
- Keras + tensorflow experiments with knowledge distillation on the EMNIST dataset ☆34 · Updated 7 years ago
- Blog https://medium.com/neuralmachine/knowledge-distillation-dc241d7c2322 ☆60 · Updated 6 years ago
- Code for https://arxiv.org/abs/1810.04622 ☆140 · Updated 5 years ago
- Papers for deep neural network compression and acceleration ☆397 · Updated 3 years ago
- Using Teacher Assistants to Improve Knowledge Distillation: https://arxiv.org/pdf/1902.03393.pdf ☆257 · Updated 5 years ago
- Model Compression Based on Geoffrey Hinton's Logit Regression Method in Keras, applied to MNIST: 16x compression over 0.95 percent accuracy… ☆61 · Updated 5 years ago
- Code for recent knowledge distillation algorithms and benchmark results via the TF2.0 low-level API ☆109 · Updated 3 years ago
- TensorFlow implementations of visualization of convolutional neural networks, such as Grad-Class Activation Mapping and guided back propagation… ☆196 · Updated 6 years ago
- I demonstrate how to compress a neural network using pruning in tensorflow ☆78 · Updated 7 years ago
- PyTorch implementation of "Distilling the Knowledge in a Neural Network" for model compression ☆58 · Updated 7 years ago
- A simpler version of the self-attention layer from SAGAN, and some image classification results ☆212 · Updated 5 years ago
- A PyTorch implementation of the paper Mixup: Beyond Empirical Risk Minimization ☆123 · Updated 7 years ago
- An implementation of MNIST center loss training and visualization ☆75 · Updated 7 years ago
- Implementation of ResNeXt models from the paper Aggregated Residual Transformations for Deep Neural Networks in Keras 2.0+ ☆224 · Updated 4 years ago
- Corrupted labels and label smoothing ☆128 · Updated 7 years ago
- Focal Loss for multi-classification in tensorflow ☆80 · Updated 6 years ago
- Keras model convolutional filter pruning package ☆44 · Updated 6 years ago
- Implements quantized distillation. Code for our paper "Model compression via distillation and quantization" ☆332 · Updated 9 months ago
- Transfer knowledge from a large DNN or an ensemble of DNNs into a small DNN ☆28 · Updated 7 years ago
- Label Refinery: Improving ImageNet Classification through Label Progression ☆279 · Updated 6 years ago
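Two of the repositories above implement mixup ("mixup: Beyond Empirical Risk Minimization"). The core idea fits in a few lines: draw a mixing coefficient from a Beta(α, α) distribution and take convex combinations of randomly paired inputs and their labels. This NumPy sketch assumes one-hot labels; the function name, default α, and seeding are illustrative, not taken from either repository.

```python
import numpy as np

def mixup(x, y, alpha=0.2, rng=None):
    """Sketch of mixup augmentation (assumes y is one-hot encoded).

    Draws lambda ~ Beta(alpha, alpha), shuffles the batch, and returns
    convex combinations of each example with its shuffled partner.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    idx = rng.permutation(len(x))          # random pairing within the batch
    x_mix = lam * x + (1 - lam) * x[idx]   # interpolate inputs
    y_mix = lam * y + (1 - lam) * y[idx]   # interpolate labels the same way
    return x_mix, y_mix, lam
```

Because labels are interpolated with the same coefficient as inputs, the mixed label rows still sum to one, so they remain valid targets for a cross-entropy loss.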