ShoufaChen / AdaptFormer
[NeurIPS 2022] Implementation of "AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition"
☆353 · Updated 2 years ago
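AdaptFormer's core idea is parameter-efficient tuning: a small bottleneck branch (the "AdaptMLP") runs in parallel with each frozen transformer MLP block, and only the bottleneck weights are trained. Below is a minimal NumPy sketch of that forward pass; the dimensions, scale factor, and class name are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class AdaptMLP:
    """Hypothetical sketch of AdaptFormer's parallel adapter: a lightweight
    down-project / ReLU / up-project branch added beside the frozen MLP."""

    def __init__(self, dim=768, bottleneck=64, scale=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Only these two small matrices would be trained; the backbone stays frozen.
        self.down = rng.normal(0.0, 0.02, size=(dim, bottleneck))
        self.up = np.zeros((bottleneck, dim))  # zero-init: adapter starts as identity
        self.scale = scale

    def __call__(self, x, frozen_mlp):
        # x: (tokens, dim); frozen_mlp stands in for the pretrained MLP block.
        adapter_out = relu(x @ self.down) @ self.up * self.scale
        return frozen_mlp(x) + adapter_out
```

Because `self.up` is zero-initialized, the adapter contributes nothing at the start of training, so tuning begins from the pretrained model's behavior.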
Alternatives and similar repositories for AdaptFormer:
Users interested in AdaptFormer are comparing it to the repositories listed below.
- [ICLR'23] AIM: Adapting Image Models for Efficient Video Action Recognition ☆289 · Updated last year
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT; [Tech report] Convpass ☆181 · Updated last year
- [CVPR 2023] This repository includes the official implementation of our paper "Masked Autoencoders Enable Efficient Knowledge Distillers" ☆105 · Updated last year
- This is a PyTorch implementation of "Context AutoEncoder for Self-Supervised Representation Learning" ☆196 · Updated 2 years ago
- Python code for the ICLR 2022 spotlight paper "EViT: Expediting Vision Transformers via Token Reorganizations" ☆181 · Updated last year
- Official code for "Uniform Masking: Enabling MAE Pre-training for Pyramid-based Vision Transformers with Locality" ☆242 · Updated 2 years ago
- iFormer: Inception Transformer ☆245 · Updated 2 years ago
- [TPAMI] Searching prompt modules for parameter-efficient transfer learning. ☆228 · Updated last year
- Exploring Visual Prompts for Adapting Large-Scale Models ☆277 · Updated 2 years ago
- [CVPR 2022] BatchFormer: Learning to Explore Sample Relationships for Robust Representation Learning, https://arxiv.org/abs/2203.01522 ☆251 · Updated last year
- ConvMAE: Masked Convolution Meets Masked Autoencoders ☆502 · Updated 2 years ago
- ☆256 · Updated 2 years ago
- 'NKD and USKD' (ICCV 2023) and 'ViTKD' (CVPRW 2024) ☆228 · Updated last year
- Reading list for research topics in Masked Image Modeling ☆332 · Updated 4 months ago
- Official code for "Top-Down Visual Attention from Analysis by Synthesis" (CVPR 2023 highlight) ☆165 · Updated last year
- [NeurIPS'22] This is an official implementation for "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning". ☆180 · Updated last year
- MixMIM: Mixed and Masked Image Modeling for Efficient Visual Representation Learning ☆142 · Updated last year
- Official PyTorch implementation of A-ViT: Adaptive Tokens for Efficient Vision Transformer (CVPR 2022) ☆154 · Updated 2 years ago
- ☆515 · Updated 2 years ago
- [CVPR 2022 Oral] Crafting Better Contrastive Views for Siamese Representation Learning ☆285 · Updated 2 years ago
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆401 · Updated 6 months ago
- [NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification ☆600 · Updated last year
- ☆85 · Updated last year
- ☆605 · Updated last year
- (AAAI 2023 Oral) PyTorch implementation of "CF-ViT: A General Coarse-to-Fine Method for Vision Transformer" ☆103 · Updated last year
- Multimodal Prompting with Missing Modalities for Visual Recognition, CVPR'23 ☆200 · Updated last year
- Code for the ECCV 2022 paper "Contrastive Deep Supervision" ☆69 · Updated 2 years ago
- [CVPR'23] Hard Patches Mining for Masked Image Modeling ☆91 · Updated last year
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆213 · Updated 2 years ago
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of the paper "Self-regulating Prompts: Foundational Model Adaptation without F…" ☆261 · Updated last year