OSVAI / SSD-KD
The official project website of "Small Scale Data-Free Knowledge Distillation" (SSD-KD), published at CVPR 2024.
☆21 · Updated last year
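All of the repositories listed below center on knowledge distillation in some form. For context, here is a minimal sketch of the classic soft-label distillation loss (Hinton et al., 2015) that these methods build on; this is generic background rather than SSD-KD's actual data-free pipeline, and the `temperature` default is an illustrative assumption.

```python
# Minimal sketch of the classic soft-label KD loss (Hinton et al., 2015).
# Generic background only, NOT the SSD-KD method itself; temperature=4.0
# is an illustrative assumption.
import torch
import torch.nn.functional as F

def kd_loss(student_logits: torch.Tensor,
            teacher_logits: torch.Tensor,
            temperature: float = 4.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student
    distributions, scaled by T^2 to keep gradient magnitudes stable."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    return F.kl_div(log_p_student, p_teacher,
                    reduction="batchmean") * temperature ** 2
```

Data-free variants such as SSD-KD differ mainly in how the inputs fed to this loss are synthesized, since the original training data is unavailable.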
Alternatives and similar repositories for SSD-KD
Users interested in SSD-KD are comparing it to the libraries listed below.
- Code for 'Multi-level Logit Distillation' (CVPR 2023) ☆71 · Updated last year
- PyTorch code and checkpoints release for OFA-KD: https://arxiv.org/abs/2310.19444 ☆135 · Updated last year
- [AAAI 2023] Official PyTorch code for "Curriculum Temperature for Knowledge Distillation" ☆182 · Updated last year
- [ICCV 2023] Robust Mixture-of-Expert Training for Convolutional Neural Networks by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Hua… ☆67 · Updated 2 years ago
- Awesome Knowledge-Distillation for CV ☆92 · Updated last year
- [CVPR 2023] Federated Incremental Semantic Segmentation ☆40 · Updated 2 years ago
- [CVPR 2024] Official implementations of CLIP-KD: An Empirical Study of CLIP Model Distillation ☆143 · Updated 5 months ago
- ☆28 · Updated 2 years ago
- This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation). ☆82 · Updated 10 months ago
- The official implementation of "Asymmetric Patch Sampling for Contrastive Learning" ☆55 · Updated 7 months ago
- [CVPR 2023] Class Attention Transfer Based Knowledge Distillation ☆46 · Updated 2 years ago
- [CVPR 2022] Official implementation for "Knowledge Distillation with the Reused Teacher Classifier" ☆102 · Updated 3 years ago
- Efficient Dataset Distillation by Representative Matching ☆113 · Updated last year
- [CVPR 2025] Official implementation of the paper "Rethinking Few-Shot Adaptation of Vision-Language Models in Two Stages" ☆29 · Updated 10 months ago
- 'NKD and USKD' (ICCV 2023) and 'ViTKD' (CVPRW 2024) ☆243 · Updated 2 years ago
- [ICLR 2024] ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation ☆71 · Updated last year
- Official implementation for "Knowledge Distillation with Refined Logits" ☆21 · Updated last year
- [CVPR 2024] Official implementation of the paper "FreeKD: Knowledge Distillation via Semantic Frequency Prompt" ☆50 · Updated last year
- [Survey] Awesome List of Mixup Augmentation and Beyond (https://arxiv.org/abs/2409.05202) ☆164 · Updated last year
- ☆10 · Updated 2 years ago
- Code for the ICLR 2023 oral paper "Towards Stable Test-Time Adaptation in Dynamic Wild World" ☆201 · Updated 2 years ago
- [CVPR 2024] Efficient Dataset Distillation via Minimax Diffusion ☆104 · Updated last year
- Code for "Data-Free Knowledge Distillation via Feature Exchange and Activation Region Constraint" ☆21 · Updated 2 years ago
- Official implementation of the paper "Knowledge Distillation from A Stronger Teacher" (NeurIPS 2022) ☆155 · Updated 3 years ago
- Knowledge Amalgamation, Multi-teacher KD, Ensemble KD ☆12 · Updated 4 years ago
- Official code for Scale Decoupled Distillation ☆77 · Updated last year
- Multimodal Large Language Model (MLLM) tuning survey: "Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model" ☆94 · Updated 6 months ago
- Practical Continual Forgetting for Pre-trained Vision Models (CVPR 2024; T-PAMI 2026) ☆70 · Updated 3 weeks ago
- Official PyTorch implementation of PS-KD ☆89 · Updated 3 years ago