DefangChen / Knowledge-Distillation-Paper
This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation).
☆82 · Updated 10 months ago
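For orientation, most of the repositories below build on the classic logit-distillation objective of Hinton et al. (2015): a temperature-softened KL divergence between teacher and student predictions, mixed with the usual cross-entropy on hard labels. A minimal PyTorch sketch; the `T` and `alpha` hyperparameters and all names are illustrative, not taken from any repository listed here:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Classic logit distillation (Hinton et al., 2015).

    student_logits, teacher_logits: (batch, num_classes) raw logits.
    T: softmax temperature; higher T softens both distributions.
    alpha: weight of the distillation term vs. the hard-label term.
    """
    # Soften both distributions with temperature T.
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)

    # KL divergence between the softened distributions; the T**2 factor
    # keeps gradient magnitudes comparable across temperatures.
    soft_loss = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

    # Standard cross-entropy on the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

Many of the papers below refine exactly these ingredients, e.g., how the temperature is chosen (curriculum temperature), which teacher is usable (stronger teachers), or which signal is matched (features, attention, multi-level logits).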
Alternatives and similar repositories for Knowledge-Distillation-Paper
Users interested in Knowledge-Distillation-Paper are comparing it to the repositories listed below.
- [CVPR 2022] Official implementation of "Knowledge Distillation with the Reused Teacher Classifier". ☆102 · Updated 3 years ago
- Official implementation of the paper "Knowledge Distillation from A Stronger Teacher" (NeurIPS 2022). ☆155 · Updated 3 years ago
- Official PyTorch implementation of PS-KD. ☆89 · Updated 3 years ago
- Official implementation of the paper "Knowledge Diffusion for Distillation" (NeurIPS 2023). ☆94 · Updated 2 years ago
- Code for "Multi-level Logit Distillation" (CVPR 2023). ☆71 · Updated last year
- Awesome Knowledge-Distillation for CV. ☆92 · Updated last year
- [AAAI 2021, TKDE 2023] Official implementation of "Cross-Layer Distillation with Semantic Calibration". ☆78 · Updated last year
- PyTorch implementation of the paper "Dataset Distillation via Factorization" (NeurIPS 2022); for the general dataset-distillation setup, see the sketch after this list. ☆67 · Updated 3 years ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML 2022). ☆116 · Updated 2 years ago
- [AAAI 2023] Official PyTorch code for "Curriculum Temperature for Knowledge Distillation". ☆182 · Updated last year
- Official PyTorch (MMCV) implementation of "Adversarial AutoMixup" (ICLR 2024 spotlight). ☆71 · Updated last year
- Efficient Dataset Distillation by Representative Matching. ☆113 · Updated last year
- Code for the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023). ☆40 · Updated 2 years ago
- Official implementation of "Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching" (AAAI 2021). ☆118 · Updated 4 years ago
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching. ☆105 · Updated last year
- [IJCAI 2021] Contrastive Model Inversion for Data-Free Knowledge Distillation. ☆73 · Updated 3 years ago
- [NeurIPS 2022] Official implementation of "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning". ☆192 · Updated 2 years ago
- [NeurIPS 2023 spotlight] Large-scale Dataset Distillation/Condensation; 50 IPC (images per class) achieves the highest 60.8% on original … ☆136 · Updated last year
- Official implementation of the CVPR 2023 paper "BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning". ☆109 · Updated 2 years ago
- [ECCV 2022] A generalized long-tailed challenge that incorporates both the conventional class-wise imbalance and the overlooked attribute… ☆125 · Updated 3 weeks ago
- ☆28 · Updated 2 years ago
- [NeurIPS 2022] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective. ☆37 · Updated 3 years ago
- Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation (NeurIPS 2022). ☆34 · Updated 3 years ago
- [CVPR 2023] Official implementation of the paper "Masked Autoencoders Enable Efficient Knowledge Distillers". ☆109 · Updated 2 years ago
- 'NKD and USKD' (ICCV 2023) and 'ViTKD' (CVPRW 2024). ☆242 · Updated 2 years ago
- [ICCV 2023 oral] Official repository for the paper "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning". ☆75 · Updated 2 years ago
- [CVPR 2024] Efficient Dataset Distillation via Minimax Diffusion. ☆104 · Updated last year
- PyTorch code and checkpoint release for OFA-KD: https://arxiv.org/abs/2310.19444. ☆135 · Updated last year
- Official implementation of "Robust Training under Label Noise by Over-parameterization". ☆66 · Updated 3 years ago
- [ICCV 2023] Dataset Quantization. ☆263 · Updated 2 years ago
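Several of the entries above (the factorization, representative-matching, trajectory-matching, minimax-diffusion, and quantization repositories) belong to the dataset distillation/condensation line of work, which optimizes a small synthetic set so that training on it mimics training on the real data. A heavily simplified sketch of the gradient-matching formulation this line builds on (after Zhao et al., ICLR 2021); the model, optimizer, and layer-wise cosine distance here are illustrative assumptions, not code from any repository above:

```python
import torch
import torch.nn.functional as F

def gradient_match_step(model, x_syn, y_syn, x_real, y_real, syn_opt):
    """One gradient-matching update of the synthetic images.

    x_syn: learnable synthetic images (leaf tensor with requires_grad=True).
    y_syn: their fixed labels; x_real, y_real: a real minibatch with the
    same label distribution. syn_opt: optimizer over [x_syn] only.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradients the network would see on real data (constants w.r.t. x_syn).
    g_real = torch.autograd.grad(
        F.cross_entropy(model(x_real), y_real), params)
    g_real = [g.detach() for g in g_real]

    # Gradients on the synthetic data; keep the graph so the matching
    # loss can backpropagate into the synthetic pixels.
    g_syn = torch.autograd.grad(
        F.cross_entropy(model(x_syn), y_syn), params, create_graph=True)

    # Match parameter-wise gradients with a cosine distance.
    loss = sum(1 - F.cosine_similarity(a.flatten(), b.flatten(), dim=0)
               for a, b in zip(g_syn, g_real))

    syn_opt.zero_grad()
    loss.backward()  # only x_syn is stepped; model weights are left alone
    syn_opt.step()
    return loss.item()
```

In the full methods this inner step is interleaved with updates of the network itself and repeated across many random initializations; the sketch shows only the core matching objective that the trajectory-matching and diffusion-based entries above refine.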