tejasgodambe / knowledge-distillation
Transfer knowledge from a large DNN or an ensemble of DNNs into a small DNN
☆28 · Updated 8 years ago
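The repository's one-line summary describes classic knowledge distillation: a small student network is trained to match the softened output distribution of a large teacher. As a rough illustration (not code from this repository), the standard Hinton-style soft-target loss can be sketched in NumPy; the temperature `T` and mixing weight `alpha` below are illustrative choices, not values taken from the repo:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the class axis."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Hinton-style distillation loss: a weighted sum of
    (1) KL divergence between softened teacher and student distributions, and
    (2) ordinary cross-entropy on the hard labels.
    T and alpha are illustrative hyperparameters."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # KL(teacher || student), scaled by T^2 so its gradient magnitude
    # stays comparable to the hard-label term as T grows
    kd = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=1)) * T * T
    # standard cross-entropy of the student (at T=1) against the hard labels
    p_hard = softmax(student_logits, 1.0)
    ce = -np.mean(np.log(p_hard[np.arange(len(labels)), labels] + 1e-12))
    return alpha * kd + (1 - alpha) * ce
```

In practice the teacher's logits are precomputed (or produced in a forward pass with gradients disabled) and only the student is updated; higher temperatures expose more of the teacher's "dark knowledge" about inter-class similarity.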
Alternatives and similar repositories for knowledge-distillation
Users interested in knowledge-distillation are comparing it to the repositories listed below.
- PyTorch implementation of "SNAPSHOT ENSEMBLES: TRAIN 1, GET M FOR FREE" [WIP] ☆36 · Updated 8 years ago
- This repository stores the files used for my summer internship's work on "teacher-student learning", an experimental method for training … ☆47 · Updated 6 years ago
- Curriculum Learning - Tensorflow ☆40 · Updated 7 years ago
- ☆51 · Updated 6 years ago
- A machine learning experiment ☆180 · Updated 7 years ago
- Unofficial PyTorch implementation of Born-Again Neural Networks. ☆55 · Updated 4 years ago
- 6️⃣6️⃣6️⃣ Reproduce ICLR '18 under-reviewed paper "MULTI-TASK LEARNING ON MNIST IMAGE DATASETS" ☆42 · Updated 7 years ago
- A Matlab implementation of the capsule networks (or capsnet). ☆48 · Updated 7 years ago
- Xuhong Li, Yves Grandvalet, and Franck Davoine. "Explicit Inductive Bias for Transfer Learning with Convolutional Networks." In ICML 2018… ☆56 · Updated 7 years ago
- Knowledge Distillation using Tensorflow ☆142 · Updated 5 years ago
- CP and Tucker decomposition for Convolutional Neural Networks ☆85 · Updated 7 years ago
- [ICML 2018] "Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions" ☆152 · Updated 3 years ago
- resnet_cifar10_cifar100_imagenet ☆13 · Updated 6 years ago
- Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks ☆18 · Updated 5 years ago
- Various implementations and experimentation for deep neural network model compression ☆24 · Updated 6 years ago
- Cost-Effective Object Detection: Active Sample Mining with Switchable Selection Criteria ☆12 · Updated 6 years ago
- PyTorch implementation of "Distilling the Knowledge in a Neural Network" for model compression ☆58 · Updated 7 years ago
- Implementation of Data-free Knowledge Distillation for Deep Neural Networks (on arXiv!) ☆81 · Updated 7 years ago
- Dual Path Networks on cifar-10 and fashion-mnist datasets ☆18 · Updated 7 years ago
- Multi Task Learning Implementation with Homoscedastic Uncertainty in Tensorflow ☆53 · Updated 6 years ago
- Exploring CNNs and model quantization on Caltech-256 dataset ☆84 · Updated 7 years ago
- Applied Sparse regularization (L1), Weight decay regularization (L2), ElasticNet, GroupLasso, and GroupSparseLasso to a Neural Network. ☆38 · Updated 3 years ago
- PyTorch implementation of "Fast Training of Triplet-based Deep Binary Embedding Networks". ☆40 · Updated 7 years ago
- Code for triplet GAN ☆31 · Updated 7 years ago
- Repository for the Learning without Forgetting paper, ECCV 2016 ☆84 · Updated 5 years ago
- Zero-Shot Knowledge Distillation in Deep Networks, ICML 2019 ☆49 · Updated 6 years ago
- Initial code for the paper "Incremental Learning Through Deep Adaptation" ☆50 · Updated 6 years ago
- Model compression based on Geoffrey Hinton's logit regression method in Keras, applied to MNIST: 16x compression over 0.95 percent accuracy… ☆61 · Updated 5 years ago
- Sparse Recurrent Neural Networks -- Pruning Connections and Hidden Sizes (TensorFlow) ☆74 · Updated 5 years ago
- PyTorch implementation of Structured Bayesian Pruning ☆19 · Updated 7 years ago