WangYZ1608 / Knowledge-Distillation-via-ND
The official implementation of the paper "Improving Knowledge Distillation via Regularizing Feature Norm and Direction"
☆19 · Updated last year
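The method is described here only by its title, so below is a rough, hypothetical sketch of what "regularizing feature norm and direction" could look like in a distillation loss. The function name, the loss weights, and the choice of per-sample teacher features as the direction target are all assumptions, not the paper's actual formulation:

```python
import torch.nn.functional as F

def nd_regularized_kd_loss(student_logits, teacher_logits,
                           student_feat, teacher_feat,
                           T=4.0, alpha=1.0, beta=1.0):
    # Standard temperature-scaled logit distillation (Hinton et al., 2015).
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    # Direction term (assumption): align the student feature with the
    # teacher feature by minimizing 1 - cosine similarity.
    direction = 1.0 - F.cosine_similarity(student_feat, teacher_feat, dim=1).mean()

    # Norm term (assumption): penalize the gap between the L2 norms of
    # student and teacher features.
    norm_gap = (student_feat.norm(dim=1) - teacher_feat.norm(dim=1)).abs().mean()

    return kd + alpha * direction + beta * norm_gap
```

The paper may well use a different direction target (e.g., class-wise teacher statistics rather than per-sample features) and a different norm penalty; see the repository above for the actual loss.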
Alternatives and similar repositories for Knowledge-Distillation-via-ND
Users interested in Knowledge-Distillation-via-ND are comparing it to the repositories listed below
- [CVPR-2022] Official implementation for "Knowledge Distillation with the Reused Teacher Classifier". ☆94 · Updated 2 years ago
- ☆85 · Updated last year
- [CVPR 2023] This repository includes the official implementation of our paper "Masked Autoencoders Enable Efficient Knowledge Distillers". ☆106 · Updated last year
- Code for 'Multi-level Logit Distillation' (CVPR 2023). ☆63 · Updated 7 months ago
- PyTorch code and checkpoints release for VanillaKD: https://arxiv.org/abs/2305.15781 ☆75 · Updated last year
- Official implementation for paper "Knowledge Diffusion for Distillation", NeurIPS 2023. ☆84 · Updated last year
- Official implementation of paper "Masked Distillation with Receptive Tokens", ICLR 2023. ☆68 · Updated 2 years ago
- ImageNet-1K data download and processing, for use as a dataset. ☆96 · Updated 2 years ago
- [ECCV 2024] Isomorphic Pruning for Vision Models. ☆68 · Updated 9 months ago
- The official implementation of LumiNet: The Bright Side of Perceptual Knowledge Distillation (https://arxiv.org/abs/2310.03669). ☆19 · Updated last year
- Official implementation of paper "Knowledge Distillation from A Stronger Teacher", NeurIPS 2022. ☆148 · Updated 2 years ago
- ☆26 · Updated last year
- [AAAI 2023] Official PyTorch Code for "Curriculum Temperature for Knowledge Distillation". ☆173 · Updated 5 months ago
- [CVPR 2023] Class Attention Transfer Based Knowledge Distillation. ☆44 · Updated last year
- Training ImageNet / CIFAR models with SOTA strategies and fancy techniques such as ViT, KD, Rep, etc. ☆82 · Updated last year
- Official code for Scale Decoupled Distillation. ☆41 · Updated last year
- [CVPR-2024] Official implementations of CLIP-KD: An Empirical Study of CLIP Model Distillation. ☆111 · Updated 10 months ago
- [CVPR'24] Official implementation of paper "FreeKD: Knowledge Distillation via Semantic Frequency Prompt". ☆43 · Updated last year
- PyTorch code and checkpoints release for OFA-KD: https://arxiv.org/abs/2310.19444 ☆122 · Updated last year
- [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging tech… ☆95 · Updated last year
- [NeurIPS'22] Projector Ensemble Feature Distillation. ☆29 · Updated last year
- 'NKD and USKD' (ICCV 2023) and 'ViTKD' (CVPRW 2024). ☆228 · Updated last year
- Official PyTorch Code for "Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?" (https://arxiv.org/abs/2305.12954). ☆47 · Updated last year
- 1.5–3.0× lossless training or pre-training speedup. An off-the-shelf, easy-to-implement algorithm for the efficient training of foundatio… ☆221 · Updated 8 months ago
- Official PyTorch (MMCV) implementation of "Adversarial AutoMixup" (ICLR 2024 spotlight). ☆68 · Updated 6 months ago
- Official code for our ECCV'22 paper "A Fast Knowledge Distillation Framework for Visual Recognition". ☆188 · Updated last year
- [ECCV-2022] Official implementation of MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition && PyTorch implementations of… ☆106 · Updated 2 years ago
- Zero-label image classification via OpenCLIP knowledge distillation. ☆125 · Updated last year
- [AAAI 2024] Understanding the Role of the Projector in Knowledge Distillation. ☆18 · Updated last year
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT, [Tech report] Convpass. ☆187 · Updated last year