This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation).
☆84 · Updated Mar 19, 2025
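Most of the repositories listed below build on the classic soft-label distillation objective of Hinton et al. (2015): the student matches the teacher's temperature-softened output distribution. A minimal, illustrative sketch in plain Python (the temperature `T=4.0` is an assumed example value, not a setting taken from any repository here):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax of a list of logits."""
    scaled = [z / T for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on the softened distributions, scaled by T^2
    as in the classic objective; T is an illustrative hyperparameter."""
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl

# The loss vanishes when the student reproduces the teacher exactly:
print(kd_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))
```

In practice this term is combined with the ordinary cross-entropy on ground-truth labels; multi-teacher variants (several entries below) additionally weight or gate the per-teacher KL terms.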
Alternatives and similar repositories for Knowledge-Distillation-Paper
Users interested in Knowledge-Distillation-Paper are comparing it to the repositories listed below.
- This is the implementation for the ICME-2023 paper (Adaptive Multi-Teacher Knowledge Distillation with Meta-Learning). ☆34 · Updated Apr 1, 2023
- Multi-Teacher Knowledge Distillation, code for my PhD dissertation. I used knowledge distillation as a decision-fusion and compressing m… ☆28 · Updated May 19, 2023
- Reading Papers ☆14 · Updated Mar 26, 2021
- [AAAI-2021, TKDE-2023] Official implementation for "Cross-Layer Distillation with Semantic Calibration". ☆78 · Updated Jul 29, 2024
- [AAAI 2023] Official PyTorch code for "Curriculum Temperature for Knowledge Distillation". ☆181 · Updated Dec 3, 2024
- ☆17 · Updated Aug 1, 2025
- Source code for "Dual-Level Knowledge Distillation via Knowledge Alignment and Correlation" (TNNLS), https://ieeexplore.ieee.org/abstract/… ☆12 · Updated Dec 21, 2022
- ACCV 2022 source code of the paper "Feature Decoupled Knowledge Distillation via Spatial Pyramid Pooling". ☆12 · Updated Jul 5, 2023
- This is the implementation for the ICASSP-2022 paper (Confidence-Aware Multi-Teacher Knowledge Distillation). ☆63 · Updated Feb 12, 2022
- [IJCV 2022] Domain-Specific Bias Filtering for Single Labeled Domain Generalization ☆12 · Updated Nov 10, 2022
- Awesome Knowledge-Distillation for CV ☆93 · Updated Apr 30, 2024
- Knowledge distillation papers ☆766 · Updated Feb 10, 2023
- Less is More: Task-aware Layer-wise Distillation for Language Model Compression (ICML 2023) ☆40 · Updated Aug 28, 2023
- Gradients as Features for Deep Representation Learning ☆43 · Updated Mar 8, 2020
- [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective ☆37 · Updated Dec 15, 2022
- ☆14 · Updated Apr 11, 2024
- PyTorch implementation of various Knowledge Distillation (KD) methods. ☆1,745 · Updated Nov 25, 2021
- Awesome Knowledge Distillation ☆3,827 · Updated Dec 25, 2025
- [ICLR 2020] Contrastive Representation Distillation (CRD), and benchmark of recent knowledge distillation methods ☆2,426 · Updated Oct 16, 2023
- PyTorch implementation for Channel Distillation ☆103 · Updated Jun 9, 2020
- A benchmark suite for Scalable Diverse Model Selection for Accessible Transfer Learning, from our NeurIPS 2021 paper. ☆15 · Updated Dec 14, 2022
- The official implementation of [CVPR 2022] Decoupled Knowledge Distillation (https://arxiv.org/abs/2203.08679) and [ICCV 2023] DOT: A Distill… ☆895 · Updated Nov 5, 2023
- Adaptive Inter-Class Similarity Distillation for Semantic Segmentation (MTAP 2025) ☆29 · Updated Nov 14, 2025
- Official implementation for "Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation" (CVPR 2021). ☆103 · Updated Apr 30, 2024
- Code to reproduce experiments from "Does Knowledge Distillation Really Work", a paper which appeared in the NeurIPS 2021 proceedings. ☆34 · Updated Sep 27, 2023
- CVPR 2023, Class Attention Transfer Based Knowledge Distillation ☆46 · Updated Jun 13, 2023
- Code of the CVPR'20 paper "Distilling Cross-Task Knowledge via Relationship Matching". ☆49 · Updated Feb 7, 2021
- The official (TMLR) implementation of LumiNet: Perception-Driven Knowledge Distillation via Statistical Logit Calibration ☆17 · Updated Aug 17, 2025
- Official implementation of "Style Generator Inversion for Image Enhancement and Animation". ☆13 · Updated Dec 2, 2021
- Code for the paper "On the Connection Between Adversarial Robustness and Saliency Map Interpretability" by C. Etmann, S. Lunz, P. Maass, … ☆16 · Updated May 9, 2019
- Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons (AAAI 2019) ☆106 · Updated Sep 9, 2019
- Official implementation for "Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching" (AAAI-2021). ☆119 · Updated Feb 9, 2021
- A PyTorch implementation of the paper "Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation", … ☆181 · Updated Jan 29, 2022
- Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples [NeurIPS 2021] ☆34 · Updated Dec 12, 2021
- MCPL: Multi-Concept Prompt Learning ☆20 · Updated May 27, 2024
- Code for the paper "Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization" (https://arxiv.org/abs/2… ☆23 · Updated Nov 23, 2020
- ☆19 · Updated Jun 26, 2021
- Code for "Self-Distillation as Instance-Specific Label Smoothing" ☆16 · Updated Oct 22, 2020
- Official implementation of the paper "S2-VER: Semi-Supervised Visual Emotion Recognition". ☆11 · Updated Apr 28, 2024