DefangChen / Knowledge-Distillation-Paper
This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation).
☆74 · Updated last month
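Most of the repositories listed below build on the classic logit-matching objective of Hinton et al. (2015). As a point of reference, here is a minimal PyTorch sketch of that loss; the function name and the hyperparameters `T` and `alpha` are illustrative choices, not taken from any repository on this page.

```python
# Minimal sketch of the classic soft-label distillation loss
# (Hinton et al., 2015); names and default values are illustrative only.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Weighted sum of a softened teacher KL term and hard-label cross-entropy."""
    # Temperature-softened distributions; the T*T factor keeps the
    # gradient magnitude of the soft term comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

Many of the variants below (e.g., multi-level logit distillation, curriculum temperature) mostly change how the soft term is computed or how `T` is scheduled during training.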
Alternatives and similar repositories for Knowledge-Distillation-Paper:
Users interested in Knowledge-Distillation-Paper are comparing it to the libraries listed below.
- [CVPR-2022] Official implementation for "Knowledge Distillation with the Reused Teacher Classifier". ☆92 · Updated 2 years ago
- Code for "Multi-level Logit Distillation" (CVPR 2023). ☆58 · Updated 3 months ago
- Official implementation for the paper "Knowledge Diffusion for Distillation", NeurIPS 2023. ☆79 · Updated 11 months ago
- Official PyTorch implementation of PS-KD. ☆83 · Updated 2 years ago
- Official implementation of the paper "Knowledge Distillation from A Stronger Teacher", NeurIPS 2022. ☆140 · Updated 2 years ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22). ☆107 · Updated last year
- [IJCAI-2021] Contrastive Model Inversion for Data-Free Knowledge Distillation. ☆68 · Updated 2 years ago
- Official implementation for "Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching" (AAAI-2021). ☆116 · Updated 3 years ago
- [AAAI-2021, TKDE-2023] Official implementation for "Cross-Layer Distillation with Semantic Calibration". ☆74 · Updated 5 months ago
- Code for the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023). ☆40 · Updated last year
- In progress. ☆64 · Updated 9 months ago
- [AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation. ☆67 · Updated 2 years ago
- [AAAI 2023] Official PyTorch code for "Curriculum Temperature for Knowledge Distillation". ☆163 · Updated last month
- Efficient Dataset Distillation by Representative Matching. ☆110 · Updated 10 months ago
- Official implementation for the CVPR'23 paper "BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning". ☆106 · Updated last year
- Densely Guided Knowledge Distillation using Multiple Teacher Assistants. ☆8 · Updated 3 years ago
- Awesome Knowledge-Distillation for CV. ☆73 · Updated 8 months ago
- ☆42 · Updated last year
- ☆60 · Updated last year
- [CVPR 2023] This repository includes the official implementation of our paper "Masked Autoencoders Enable Efficient Knowledge Distillers". ☆103 · Updated last year
- ICLR 2024, "Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching". ☆95 · Updated 7 months ago
- [CVPR-22] The official implementation of the paper "AdaViT: Adaptive Vision Transformers for Efficient Image Recognition". ☆50 · Updated 2 years ago
- CVPR 2023, "Class Attention Transfer Based Knowledge Distillation". ☆37 · Updated last year
- Code for the paper "Self-Distillation from the Last Mini-Batch for Consistency Regularization". ☆40 · Updated 2 years ago
- [ICLR 2022] "The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training" by Shiwei Liu, Tianlo… ☆73 · Updated 2 years ago
- [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective. ☆36 · Updated 2 years ago
- Official PyTorch (MMCV) implementation of "Adversarial AutoMixup" (ICLR 2024 spotlight). ☆64 · Updated 2 months ago
- (NeurIPS 2023 spotlight) Large-scale Dataset Distillation/Condensation; 50 IPC (Images Per Class) achieves the highest 60.8% on original … ☆125 · Updated 2 months ago
- [NeurIPS 2022] Official implementation of "Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach". ☆44 · Updated last year
- [NeurIPS'22] The official implementation of "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning". ☆176 · Updated last year