sseung0703 / Zero-shot_Knowledge_Distillation
Zero-Shot Knowledge Distillation in Deep Networks (ICML 2019)
☆49 · Updated 5 years ago
Alternatives and similar repositories for Zero-shot_Knowledge_Distillation
Users interested in Zero-shot_Knowledge_Distillation are comparing it to the repositories listed below.
- ☆51 · Updated 5 years ago
- Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons (AAAI 2019) · ☆104 · Updated 5 years ago
- Zero-Shot Knowledge Distillation in Deep Networks · ☆67 · Updated 3 years ago
- Knowledge Distillation with Adversarial Samples Supporting Decision Boundary (AAAI 2019) · ☆71 · Updated 5 years ago
- Role-Wise Data Augmentation for Knowledge Distillation · ☆19 · Updated 2 years ago
- Improving generalization by controlling label-noise information in neural network weights · ☆40 · Updated 4 years ago
- ☆57 · Updated 3 years ago
- Full implementation of the paper "Rethinking Softmax with Cross-Entropy: Neural Network Classifier as Mutual Information Estimator" · ☆101 · Updated 5 years ago
- Implementation of the Heterogeneous Knowledge Distillation using Information Flow Modeling method · ☆24 · Updated 5 years ago
- Code for DATA: Differentiable ArchiTecture Approximation · ☆11 · Updated 3 years ago
- Cheap distillation for convolutional neural networks · ☆33 · Updated 6 years ago
- Source code accompanying the CVPR 2019 paper "NetTailor: Tuning the architecture, not just the weights" · ☆53 · Updated 3 years ago
- Global Sparse Momentum SGD for pruning very deep neural networks · ☆44 · Updated 2 years ago
- ☆28 · Updated 5 years ago
- Implementation of several knowledge distillation techniques in PyTorch · ☆15 · Updated 6 years ago
- 3rd-place solution for the NeurIPS 2019 MicroNet challenge · ☆35 · Updated 5 years ago
- Reproducing VID (CVPR 2019); work in progress · ☆20 · Updated 5 years ago
- ☆37 · Updated 4 years ago
- Learning Metrics from Teachers: Compact Networks for Image Embedding (CVPR 2019) · ☆76 · Updated 6 years ago
- Torch implementation of the accepted CVPR 2019 paper "Iterative Normalization: Beyond Standardization towards Effic…" · ☆25 · Updated 4 years ago
- Official code for Group-Transformer (Scale down Transformer by Grouping Features for a Lightweight Character-level Language Model, COLING…) · ☆25 · Updated 4 years ago
- Official repository for Big-Little Net · ☆58 · Updated 5 years ago
- PyTorch implementation of "Learning Filter Basis for Convolutional Neural Network Compression" (ICCV 2019) · ☆18 · Updated 3 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… · ☆81 · Updated 3 years ago
- Regularizing Meta-Learning via Gradient Dropout · ☆53 · Updated 5 years ago
- ☆23 · Updated 4 years ago
- PyTorch implementation of the CVPR 2021 paper "SuperMix: Supervising the Mixing Data Augmentation" · ☆92 · Updated 3 years ago
- ☆25 · Updated 6 years ago
- Code for recent knowledge distillation algorithms and benchmark results via the TF2.0 low-level API · ☆110 · Updated 3 years ago
- ☆17 · Updated 6 years ago