guoyang9 / PELA
PELA: Learning Parameter-Efficient Models with Low-Rank Approximation [CVPR 2024]
☆19 · Updated last year
Alternatives and similar repositories for PELA
Users that are interested in PELA are comparing it to the libraries listed below
- [ICCV 2023 oral] This is the official repository for our paper "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning". ☆75 · Updated 2 years ago
- [CVPR 2024] VkD: Improving Knowledge Distillation using Orthogonal Projections ☆57 · Updated last year
- Adapters Strike Back (CVPR 2024) ☆39 · Updated last year
- 🔥 🔥 [WACV 2024] Mini but Mighty: Finetuning ViTs with Mini Adapters ☆19 · Updated last year
- Official implementation for the paper "Knowledge Diffusion for Distillation", NeurIPS 2023 ☆94 · Updated last year
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (The paper of NORM is published in IC…) ☆20 · Updated 2 years ago
- [CVPR 2022] This is the official implementation of the paper "AdaViT: Adaptive Vision Transformers for Efficient Image Recognition". ☆55 · Updated 3 years ago
- Official PyTorch implementation of "Which Tokens to Use? Investigating Token Reduction in Vision Transformers", presented at ICCV 2023 NIVT … ☆35 · Updated 2 years ago
- [CVPR 2024] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models ☆89 · Updated last year
- ☆48 · Updated 2 years ago
- [BMVC 2022] Information Theoretic Representation Distillation ☆19 · Updated 2 years ago
- Official implementation of "Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer" ☆75 · Updated 3 years ago
- ☆34 · Updated 2 years ago
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT, [Tech report] Convpass ☆197 · Updated 2 years ago
- [CVPR 2024] Official implementation of the paper "DePT: Decoupled Prompt Tuning" ☆109 · Updated last month
- [ICCV 2023] Diverse Data Augmentation with Diffusions for Effective Test-time Prompt Tuning & [IJCV 2025] Diffusion-Enhanced Test-time Adap… ☆69 · Updated 11 months ago
- [CVPR 2023] This repository includes the official implementation of our paper "Masked Autoencoders Enable Efficient Knowledge Distillers" ☆108 · Updated 2 years ago
- Towards Unified and Effective Domain Generalization ☆32 · Updated 2 years ago
- [WACV 2025 Oral] DeepMIM: Deep Supervision for Masked Image Modeling ☆55 · Updated 7 months ago
- [ICCV 2023] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging tech… ☆104 · Updated 2 years ago
- ☆30 · Updated last year
- SVL-Adapter: Self-Supervised Adapter for Vision-Language Pretrained Models ☆21 · Updated last year
- [ECCV 2022] AMixer: Adaptive Weight Mixing for Self-attention Free Vision Transformers ☆29 · Updated 3 years ago
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆29 · Updated last year
- Proteus (ICLR 2025) ☆54 · Updated 9 months ago
- [NeurIPS 2024] The official implementation of "Wasserstein Distance Rivals Kullback-Leibler Divergence for Knowledge Distillation" https://ar… ☆47 · Updated last year
- [CVPR 2024] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP)… ☆79 · Updated 6 months ago
- [ECCV 2024] This is the official implementation of "Stitched ViTs are Flexible Vision Backbones" ☆29 · Updated last year
- [CVPR 2024] Official implementations of "CLIP-KD: An Empirical Study of CLIP Model Distillation" ☆136 · Updated 4 months ago
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners ☆44 · Updated 2 years ago