roymiles / Simple-Recipe-Distillation
[AAAI 2024] Understanding the Role of the Projector in Knowledge Distillation
☆13 · Updated 7 months ago
Related projects:
- Official implementation of the paper "Knowledge Diffusion for Distillation" (NeurIPS 2023) ☆72 · Updated 7 months ago
- [CVPR 2024] VkD: Improving Knowledge Distillation using Orthogonal Projections ☆39 · Updated 2 weeks ago
- [ICCV 2023] Binary Adapters, [AAAI 2023] FacT, [Tech report] Convpass ☆167 · Updated last year
- [ICCV 2023 oral] Official repository for the paper "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning" ☆60 · Updated 11 months ago
- [CVPR 2023] Official implementation of the paper "Masked Autoencoders Enable Efficient Knowledge Distillers" ☆97 · Updated last year
- Code for "Multi-level Logit Distillation" (CVPR 2023) ☆49 · Updated last year
- Official implementation of AdaViT ☆48 · Updated 2 years ago
- [ICCV 2023] An approach to enhance the efficiency of Vision Transformers (ViT) by concurrently employing token pruning and token merging tech… ☆85 · Updated last year
- Adaptive Token Sampling for Efficient Vision Transformers (ECCV 2022 oral) ☆90 · Updated 4 months ago
- Official code for Scale Decoupled Distillation ☆29 · Updated 5 months ago
- [NeurIPS 2023] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions ☆59 · Updated 4 months ago
- Official implementation of Autoregressive Pretraining with Mamba in Vision ☆53 · Updated 3 months ago
- ☆55 · Updated last year
- Official code for Cumulative Spatial Knowledge Distillation for Vision Transformers (ICCV 2023) https://openaccess.thecvf.com/content/ICC… ☆13 · Updated 10 months ago
- [CVPR 2024] Official implementation of CLIP-KD: An Empirical Study of CLIP Model Distillation ☆57 · Updated 2 months ago
- Official implementation of the paper "Masked Autoencoders are Efficient Class Incremental Learners" ☆36 · Updated 3 months ago
- [CVPR 2024] Efficient Dataset Distillation via Minimax Diffusion ☆70 · Updated 5 months ago
- [ECCV 2024] Isomorphic Pruning for Vision Models ☆46 · Updated last month
- ☆49 · Updated last year
- ImageNet-1K data download and processing for use as a dataset ☆55 · Updated last year
- Official implementation of the paper "Improving Knowledge Distillation via Regularizing Feature Norm and Direction" ☆3 · Updated last year
- Official implementation of the paper "Knowledge Distillation from A Stronger Teacher" (NeurIPS 2022) ☆133 · Updated last year
- Official PyTorch implementation of "Which Tokens to Use? Investigating Token Reduction in Vision Transformers", presented at the ICCV 2023 NIVT … ☆29 · Updated last year
- ☆22 · Updated last year
- Project page for "Multi-Task Dense Prediction via Mixture of Low-Rank Experts" ☆48 · Updated 4 months ago
- Official PyTorch (MMCV) implementation of "Adversarial AutoMixup" (ICLR 2024 spotlight) ☆54 · Updated 2 weeks ago
- [CVPR 2023] Class Attention Transfer Based Knowledge Distillation ☆32 · Updated last year
- ☆79 · Updated last year
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆25 · Updated 6 months ago
- [ICLR 2024] ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation ☆49 · Updated 4 months ago