zhengli97 / CTKD
[AAAI 2023] Official PyTorch Code for "Curriculum Temperature for Knowledge Distillation"
☆176 · Updated 6 months ago
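For context on the "temperature" in the paper title, the sketch below shows the standard temperature-scaled KD loss that curriculum-temperature methods build on. It is a generic illustration, not the CTKD implementation (CTKD varies the temperature over training rather than fixing it), and the function and parameter names (`kd_loss`, `T`) are illustrative.

```python
# Minimal sketch of vanilla temperature-scaled knowledge distillation,
# NOT the CTKD method: here T is a fixed hyperparameter, whereas CTKD
# adjusts the temperature over the course of training.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    # Soften both distributions with temperature T, then match them with KL divergence.
    # The T*T factor keeps gradient magnitudes comparable across temperatures.
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)

# Example usage with random logits for an 8-sample batch and 100 classes
s = torch.randn(8, 100)
t = torch.randn(8, 100)
print(kd_loss(s, t).item())
```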
Alternatives and similar repositories for CTKD
Users interested in CTKD are comparing it to the repositories listed below.
- 'NKD and USKD' (ICCV 2023) and 'ViTKD' (CVPRW 2024) ☆233 · Updated last year
- Official implementation of paper "Knowledge Distillation from A Stronger Teacher", NeurIPS 2022 ☆146 · Updated 2 years ago
- Code for 'Multi-level Logit Distillation' (CVPR 2023) ☆65 · Updated 9 months ago
- PyTorch code and checkpoints release for OFA-KD: https://arxiv.org/abs/2310.19444 ☆125 · Updated last year
- [CVPR-2022] Official implementation for "Knowledge Distillation with the Reused Teacher Classifier". ☆97 · Updated 3 years ago
- ☆85 · Updated last year
- CVPR 2023, Class Attention Transfer Based Knowledge Distillation ☆44 · Updated 2 years ago
- [ECCV-2022] Official implementation of MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition && PyTorch Implementations of… ☆108 · Updated 2 years ago
- [CVPR'24] Official implementation of paper "FreeKD: Knowledge Distillation via Semantic Frequency Prompt". ☆45 · Updated last year
- PyTorch code and checkpoints release for VanillaKD: https://arxiv.org/abs/2305.15781 ☆75 · Updated last year
- Official implementation of paper "Masked Distillation with Receptive Tokens", ICLR 2023. ☆69 · Updated 2 years ago
- Official code for Scale Decoupled Distillation ☆41 · Updated last year
- Masked Generative Distillation (ECCV 2022) ☆225 · Updated 2 years ago
- Official PyTorch implementation of PS-KD ☆88 · Updated 2 years ago
- [PR 2024] Official PyTorch Code for "Dual Teachers for Self-Knowledge Distillation" ☆12 · Updated 6 months ago
- [AAAI-2021, TKDE-2023] Official implementation for "Cross-Layer Distillation with Semantic Calibration". ☆75 · Updated 10 months ago
- ☆126 · Updated 4 years ago
- Official implementation for paper "Knowledge Diffusion for Distillation", NeurIPS 2023 ☆88 · Updated last year
- Training ImageNet / CIFAR models with SOTA strategies and fancy techniques such as ViT, KD, Rep, etc. ☆82 · Updated last year
- The official implementation for paper: Improving Knowledge Distillation via Regularizing Feature Norm and Direction ☆21 · Updated last year
- Official implementation for (Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching, AAAI-2021) ☆117 · Updated 4 years ago
- [CVPR-2024] Official implementations of CLIP-KD: An Empirical Study of CLIP Model Distillation ☆119 · Updated 11 months ago
- The official PyTorch implementation of CHEX: CHannel EXploration for CNN Model Compression (CVPR 2022). Paper is available at https://ope… ☆38 · Updated 2 years ago
- The official implementation of [NeurIPS 2024] Wasserstein Distance Rivals Kullback-Leibler Divergence for Knowledge Distillation https://ar… ☆39 · Updated 6 months ago
- CVPR 2022, BatchFormer: Learning to Explore Sample Relationships for Robust Representation Learning, https://arxiv.org/abs/2203.01522 ☆252 · Updated 2 years ago
- Distilling Knowledge via Knowledge Review, CVPR 2021 ☆272 · Updated 2 years ago
- Official code for our ECCV'22 paper "A Fast Knowledge Distillation Framework for Visual Recognition" ☆188 · Updated last year
- ☆26 · Updated last year
- Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation. NeurIPS 2022. ☆32 · Updated 2 years ago
- [NeurIPS'22] Projector Ensemble Feature Distillation ☆29 · Updated last year