guoyang9 / PELA — Links
PELA: Learning Parameter-Efficient Models with Low-Rank Approximation [CVPR 2024]
☆19 · Updated last year
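PELA compresses a pre-trained model by replacing large weight matrices with low-rank factors. As a minimal sketch of that core idea (not the official PELA code — the function name and shapes below are illustrative assumptions), a rank-r truncated SVD factorizes a weight matrix into two thin matrices whose product approximates it:

```python
# Illustrative sketch only, not the official PELA implementation:
# rank-r truncated SVD of a weight matrix, the basic building block
# of low-rank approximation.
import numpy as np

def low_rank_factors(W: np.ndarray, r: int):
    """Return factors A (m x r) and B (r x n) with A @ B ≈ W."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * S[:r]   # absorb singular values into the left factor
    B = Vt[:r, :]
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((768, 768))   # e.g. a ViT linear layer
A, B = low_rank_factors(W, r=64)

# Parameter count drops from m*n to r*(m+n): 589824 -> 98304 here.
print(W.size, A.size + B.size)
```

By the Eckart–Young theorem this truncation is the best rank-r approximation in the Frobenius norm, which is why SVD-based factorization is the standard starting point for methods in this list.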
Alternatives and similar repositories for PELA
Users interested in PELA are comparing it to the repositories listed below.
- ☆47 · Updated 2 years ago
- [ICCV 2023 oral] This is the official repository for our paper "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning". ☆74 · Updated 2 years ago
- Official implementation of the paper "Knowledge Diffusion for Distillation", NeurIPS 2023. ☆92 · Updated last year
- Official PyTorch implementation of "Which Tokens to Use? Investigating Token Reduction in Vision Transformers", presented at ICCV 2023 NIVT … ☆34 · Updated 2 years ago
- [CVPR 2022] Official implementation of the paper "AdaViT: Adaptive Vision Transformers for Efficient Image Recognition". ☆55 · Updated 3 years ago
- Adapters Strike Back (CVPR 2024). ☆38 · Updated last year
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference. ☆30 · Updated last year
- [CVPR 2023] Official implementation of our paper "Masked Autoencoders Enable Efficient Knowledge Distillers". ☆108 · Updated 2 years ago
- ☆34 · Updated 2 years ago
- Codebase for the paper "PPT: Token Pruning and Pooling for Efficient Vision Transformer". ☆27 · Updated last year
- ☆18 · Updated last year
- [ICCV 2023] An approach that enhances the efficiency of Vision Transformers (ViT) by concurrently employing token pruning and token merging tech… ☆101 · Updated 2 years ago
- Official implementation of "Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer". ☆73 · Updated 3 years ago
- ☆30 · Updated last year
- 🔥🔥 [WACV 2024] "Mini but Mighty: Finetuning ViTs with Mini Adapters". ☆19 · Updated last year
- [BMVC 2022] Information Theoretic Representation Distillation. ☆18 · Updated 2 years ago
- [CVPR 2024] VkD: Improving Knowledge Distillation using Orthogonal Projections. ☆56 · Updated last year
- [WACV 2025 Oral] DeepMIM: Deep Supervision for Masked Image Modeling. ☆55 · Updated 6 months ago
- [ICCV 2021] Official implementation of "Scalable Vision Transformers with Hierarchical Pooling". ☆33 · Updated 3 years ago
- Official PyTorch code for "Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?" (https://arxiv.org/abs/2305.12954). ☆49 · Updated last year
- MADAv2: Advanced Multi-Anchor Based Active Domain Adaptation Segmentation. ☆25 · Updated 2 years ago
- [NeurIPS 2022] "M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design", Hanxue … ☆132 · Updated 2 years ago
- Official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (the NORM paper is published in IC… ☆20 · Updated 2 years ago
- Adapting LLaMA Decoder to Vision Transformer. ☆30 · Updated last year
- [CVPR 2024] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models. ☆86 · Updated last year
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT; [tech report] Convpass. ☆195 · Updated 2 years ago
- [ICCV 2025] EA-ViT: Efficient Adaptation for Elastic Vision Transformer. ☆23 · Updated 3 months ago
- CVPR 2024 highlight. ☆13 · Updated last year
- [CVPR 2024] Code for "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory". ☆67 · Updated last year
- [NeurIPS 2022] Official implementation of "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning". ☆187 · Updated 2 years ago