eric-ai-lab / PEViT
Official implementation of AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers"
☆104 · Updated last year
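As an illustrative aside (not PEViT's own code), the appeal of parameter-efficient adaptation is easy to quantify. The sketch below uses a LoRA-style low-rank update, one representative method from the family such papers benchmark, to compare trainable parameter counts against full fine-tuning of a single dense layer; the dimensions and rank are assumptions chosen to match a ViT-B/16 projection.

```python
# Hedged sketch, not from the PEViT repository: counts trainable
# parameters for full fine-tuning vs. a LoRA-style low-rank update.

def full_finetune_params(d_in: int, d_out: int) -> int:
    # Full fine-tuning updates every entry of the dense weight W (d_out x d_in).
    return d_out * d_in

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    # Low-rank update W + B @ A, with A (rank x d_in) and B (d_out x rank),
    # so only the two small factors are trained.
    return rank * d_in + d_out * rank

# Assumed ViT-B/16 projection size (d = 768) and a small rank of 4.
full = full_finetune_params(768, 768)
lora = lora_params(768, 768, rank=4)
print(full, lora, f"{lora / full:.2%}")  # 589824 6144 1.04%
```

At rank 4 the adapter trains roughly 1% of the layer's parameters, which is why these methods scale to adapting large frozen backbones.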
Alternatives and similar repositories for PEViT:
Users interested in PEViT are comparing it to the repositories listed below.
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT, [Tech report] Convpass ☆179 · Updated last year
- Official implementation for CVPR'23 paper "BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning" ☆110 · Updated last year
- [CVPR 2023] This repository includes the official implementation of our paper "Masked Autoencoders Enable Efficient Knowledge Distillers" ☆104 · Updated last year
- PyTorch implementation of the paper "MILAN: Masked Image Pretraining on Language Assisted Representation" https://arxiv.org/pdf/2208.0604… ☆82 · Updated 2 years ago
- Official implementation for paper "Knowledge Diffusion for Distillation", NeurIPS 2023 ☆81 · Updated last year
- [NeurIPS'22] This is an official implementation for "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning" ☆177 · Updated last year
- Official PyTorch implementation of "Which Tokens to Use? Investigating Token Reduction in Vision Transformers", presented at ICCV 2023 NIVT … ☆35 · Updated last year
- [ICCV 23] An approach to enhance the efficiency of Vision Transformers (ViT) by concurrently employing token pruning and token merging tech… ☆94 · Updated last year
- (NeurIPS 2023 spotlight) Large-scale Dataset Distillation/Condensation, 50 IPC (Images Per Class) achieves the highest 60.8% on original … ☆126 · Updated 4 months ago
- Code for "Multitask Vision-Language Prompt Tuning" https://arxiv.org/abs/2211.11720 ☆56 · Updated 9 months ago
- Official implementation of the paper "Masked Autoencoders are Efficient Class Incremental Learners" ☆41 · Updated 9 months ago
- ☆106 · Updated last year
- [CVPR-2024] Official implementations of "CLIP-KD: An Empirical Study of CLIP Model Distillation" ☆104 · Updated 8 months ago
- [NeurIPS'23] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions ☆60 · Updated 11 months ago
- [ICCV 2023 oral] This is the official repository for our paper "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning" ☆66 · Updated last year
- Augmenting with Language-guided Image Augmentation (ALIA) ☆75 · Updated last year
- [CVPR 2024] Efficient Dataset Distillation via Minimax Diffusion ☆91 · Updated last year
- (ICLR 2023) Official PyTorch implementation of "What Do Self-Supervised Vision Transformers Learn?" ☆106 · Updated last year
- [TPAMI] Searching prompt modules for parameter-efficient transfer learning ☆227 · Updated last year
- Official repository for "CLIP model is an Efficient Continual Learner" ☆92 · Updated 2 years ago
- [ICLR 2024] Exploring Target Representations for Masked Autoencoders ☆53 · Updated last year
- Official code of "Generating Instance-level Prompts for Rehearsal-free Continual Learning" (ICCV 2023) ☆42 · Updated last year
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆102 · Updated 10 months ago
- Official PyTorch implementation of "E2VPT: An Effective and Efficient Approach for Visual Prompt Tuning" (ICCV 2023) ☆68 · Updated last year
- Code for "Finetune like you pretrain: Improved finetuning of zero-shot vision models" ☆98 · Updated last year
- ☆84 · Updated last year
- Code release for Deep Incubation (https://arxiv.org/abs/2212.04129) ☆90 · Updated 2 years ago
- Compress conventional Vision-Language Pre-training data ☆49 · Updated last year
- [NeurIPS 2024] The official implementation of "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation" ☆43 · Updated 3 months ago
- ☆58 · Updated 2 years ago