DefangChen / Knowledge-Distillation-PaperLinks
This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation).
☆78 · Updated 2 months ago
Alternatives and similar repositories for Knowledge-Distillation-Paper
Users who are interested in Knowledge-Distillation-Paper are comparing it to the libraries listed below.
- [CVPR-2022] Official implementation for "Knowledge Distillation with the Reused Teacher Classifier". ☆94 · Updated 2 years ago
- [AAAI-2021, TKDE-2023] Official implementation for "Cross-Layer Distillation with Semantic Calibration". ☆75 · Updated 10 months ago
- Code for "Multi-level Logit Distillation" (CVPR 2023). ☆64 · Updated 8 months ago
- Official implementation for "Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching" (AAAI-2021). ☆117 · Updated 4 years ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22). ☆113 · Updated last year
- Official implementation of the paper "Knowledge Distillation from A Stronger Teacher" (NeurIPS 2022). ☆146 · Updated 2 years ago
- Official PyTorch implementation of PS-KD. ☆87 · Updated 2 years ago
- Official implementation for the paper "Knowledge Diffusion for Distillation" (NeurIPS 2023). ☆86 · Updated last year
- Code for the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023). ☆40 · Updated 2 years ago
- [IJCAI-2021] Contrastive Model Inversion for Data-Free Knowledge Distillation. ☆72 · Updated 3 years ago
- Efficient Dataset Distillation by Representative Matching. ☆113 · Updated last year
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch). ☆32 · Updated 2 years ago
- Awesome Knowledge-Distillation for CV. ☆86 · Updated last year
- A simple PyTorch reimplementation of "Online Knowledge Distillation via Collaborative Learning". ☆48 · Updated 2 years ago
- [AAAI 2023] Official PyTorch code for "Curriculum Temperature for Knowledge Distillation". ☆174 · Updated 6 months ago
- ☆30 · Updated 3 years ago
- [NeurIPS-2021] Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data. ☆44 · Updated 2 years ago
- [AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation. ☆70 · Updated 2 years ago
- Code and pretrained models for the paper "Data-Free Adversarial Distillation". ☆99 · Updated 2 years ago
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching. ☆101 · Updated last year
- A PyTorch implementation of the paper "Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation", … ☆176 · Updated 3 years ago
- [CVPR 2023] This repository includes the official implementation of our paper "Masked Autoencoders Enable Efficient Knowledge Distillers". ☆106 · Updated last year
- ☆26 · Updated last year
- In progress. ☆64 · Updated last year
- Densely Guided Knowledge Distillation using Multiple Teacher Assistants. ☆11 · Updated 3 years ago
- [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective. ☆37 · Updated 2 years ago
- Official implementation for the CVPR'23 paper "BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning". ☆110 · Updated last year
- PyTorch code and checkpoint release for OFA-KD (https://arxiv.org/abs/2310.19444). ☆125 · Updated last year
- ☆12 · Updated last year
- ZSKD with PyTorch. ☆31 · Updated last year