shriramsb / Distilling-the-Knowledge-in-a-Neural-Network
Demonstration of transfer of knowledge and generalization with distillation
☆54 Updated 6 years ago
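The repository demonstrates the soft-target distillation of Hinton et al.'s "Distilling the Knowledge in a Neural Network". For orientation, here is a minimal sketch of that loss in PyTorch; the function name and the default values of the temperature `T` and mixing weight `alpha` are illustrative, not this repo's API.

```python
# Minimal sketch of Hinton-style knowledge distillation (soft targets),
# assuming a standard PyTorch setup; names and defaults are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Blend softened teacher targets with the usual hard-label cross-entropy."""
    # KL divergence between temperature-softened distributions; the T**2
    # factor keeps gradient magnitudes comparable across temperatures,
    # as suggested in the paper.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```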
Alternatives and similar repositories for Distilling-the-Knowledge-in-a-Neural-Network
Users interested in Distilling-the-Knowledge-in-a-Neural-Network are comparing it to the repositories listed below.
- This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation).☆78 Updated 2 months ago
- Official implementation for "Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching" (AAAI-2021)☆117 Updated 4 years ago
- A PyTorch implementation of the paper "Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation", …☆176 Updated 3 years ago
- PyTorch implementation of "Distilling the Knowledge in a Neural Network"☆67 Updated 3 years ago
- ☆126 Updated 4 years ago
- A simple PyTorch reimplementation of "Online Knowledge Distillation via Collaborative Learning"☆48 Updated 2 years ago
- [AAAI-2020] Official implementation for "Online Knowledge Distillation with Diverse Peers".☆74 Updated last year
- ☆27 Updated 4 years ago
- Regularizing Class-wise Predictions via Self-knowledge Distillation (CVPR 2020)☆107 Updated 4 years ago
- [ICLR-2020] Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers.☆31 Updated 5 years ago
- Implementation of the ICASSP-2022 paper "Confidence-Aware Multi-Teacher Knowledge Distillation".☆58 Updated 3 years ago
- Reproducing VID (CVPR 2019); work in progress☆20 Updated 5 years ago
- [CVPR-2022] Official implementation for "Knowledge Distillation with the Reused Teacher Classifier".☆95 Updated 2 years ago
- Code for the paper "Self-Distillation from the Last Mini-Batch for Consistency Regularization"☆40 Updated 2 years ago
- PyTorch implementation of "Data-Free Network Quantization with Adversarial Knowledge Distillation"☆29 Updated 3 years ago
- [AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation☆70 Updated 2 years ago
- Code for "Feature Fusion for Online Mutual Knowledge Distillation"☆26 Updated 4 years ago
- IJCAI 2021, "Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation"☆41 Updated 2 years ago
- Implementation of the ICME-2023 paper "Adaptive Multi-Teacher Knowledge Distillation with Meta-Learning".☆25 Updated 2 years ago
- Vision Transformer Pruning☆57 Updated 3 years ago
- [AAAI-2021, TKDE-2023] Official implementation for "Cross-Layer Distillation with Semantic Calibration".☆75 Updated 10 months ago
- Implementation of the AAAI 2021 paper "Progressive Network Grafting for Few-Shot Knowledge Distillation".☆32 Updated 10 months ago
- Benchmarking various computer vision models on the TinyImageNet dataset☆29 Updated 2 years ago
- [ECCV2020] Knowledge Distillation Meets Self-Supervision☆237 Updated 2 years ago
- [IJCAI-2021] Contrastive Model Inversion for Data-Free Knowledge Distillation☆72 Updated 3 years ago
- NeurIPS 2021, "Fine Samples for Learning with Noisy Labels"☆39 Updated 3 years ago
- Official PyTorch implementation of PS-KD☆87 Updated 2 years ago
- ☆34 Updated last year
- Code and pretrained models for the paper "Data-Free Adversarial Distillation"☆99 Updated 2 years ago
- ZSKD with PyTorch☆31 Updated last year