KMnP / vpt
❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119
⭐1,209 · Updated 2 years ago
Alternatives and similar repositories for vpt
Users interested in vpt are comparing it to the libraries listed below.
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ⭐2,166 · Updated last year
- ⭐569 · Updated 3 years ago
- ⭐657 · Updated 2 years ago
- [CVPR 2023] Official repository of the paper "MaPLe: Multi-modal Prompt Learning" ⭐800 · Updated 2 years ago
- A curated list of prompt-based papers in computer vision and vision-language learning ⭐929 · Updated 2 years ago
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains ⭐410 · Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training) ⭐1,231 · Updated last year
- [NeurIPS 2022] Implementation of "AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition" ⭐378 · Updated 3 years ago
- PyTorch implementation of MoCo v3 https://arxiv.org/abs/2104.02057 ⭐1,313 · Updated 4 years ago
- Low-rank adaptation for Vision Transformers ⭐430 · Updated last year
- Official implementation of "SimMIM: A Simple Framework for Masked Image Modeling" ⭐1,021 · Updated 3 years ago
- A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.) ⭐858 · Updated last year
- Collection of awesome test-time (domain/batch/instance) adaptation methods ⭐1,196 · Updated 2 months ago
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ⭐762 · Updated 3 years ago
- Code for ALBEF: a new vision-language pre-training method ⭐1,748 · Updated 3 years ago
- [CVPR 2022] DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting ⭐541 · Updated 2 years ago
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions ⭐1,463 · Updated 7 months ago
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022 ⭐780 · Updated 3 years ago
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation … ⭐510 · Updated 10 months ago
- A PyTorch toolbox for domain generalization, domain adaptation and semi-supervised learning ⭐1,412 · Updated 2 years ago
- Test-time Adaptation, Test-time Training and Source-free Domain Adaptation ⭐530 · Updated last year
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ⭐806 · Updated last year
- [ICCV 2021 Oral] Official PyTorch implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… ⭐895 · Updated 2 years ago
- ConvMAE: Masked Convolution Meets Masked Autoencoders ⭐521 · Updated 2 years ago
- Explainability for Vision Transformers ⭐1,060 · Updated 3 years ago
- A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP ⭐745 · Updated last month
- Exploring Visual Prompts for Adapting Large-Scale Models ⭐287 · Updated 3 years ago
- CAIRI Supervised, Semi- and Self-Supervised Visual Representation Learning Toolbox and Benchmark ⭐659 · Updated 3 months ago
- [Pattern Recognition 25] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ⭐461 · Updated 10 months ago
- Learning to Prompt (L2P) for Continual Learning @ CVPR22 and DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning @ … ⭐473 · Updated last year