penghui-yang / L2D
[ICCV 2023] Multi-Label Knowledge Distillation
☆34 · Updated last year
Alternatives and similar repositories for L2D
Users interested in L2D are comparing it to the repositories listed below.
- LiVT PyTorch Implementation. ☆72 · Updated 2 years ago
- Official PyTorch implementation of "E2VPT: An Effective and Efficient Approach for Visual Prompt Tuning" (ICCV 2023). ☆71 · Updated last year
- ☆87 · Updated 2 years ago
- ☆35 · Updated last year
- [ECCV 2022] Implementation of the paper "Locality Guidance for Improving Vision Transformers on Tiny Datasets". ☆80 · Updated 3 years ago
- [CVPR 2024] Official implementation of the paper "DePT: Decoupled Prompt Tuning". ☆107 · Updated 3 months ago
- [CVPR 2024] Official implementation of "CLIP-KD: An Empirical Study of CLIP Model Distillation". ☆125 · Updated 2 weeks ago
- [CVPR 2023] Official implementation of the paper "Masked Autoencoders Enable Efficient Knowledge Distillers". ☆107 · Updated 2 years ago
- [NeurIPS 2023] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions. ☆61 · Updated last year
- Code for the ECCV 2022 paper "Contrastive Deep Supervision"