hjbahng / visual_prompting
Exploring Visual Prompts for Adapting Large-Scale Models
☆277 · Updated 2 years ago
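For context on what the headline repo does: visual prompting, as the paper title above suggests, learns a small pixel-space perturbation (typically a padded frame around each image) by backpropagating through a frozen pretrained backbone such as CLIP. The PyTorch sketch below illustrates the padding-style prompt; the class name `PadPrompter`, the sizes, and the frozen `model` in the usage comments are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn as nn

class PadPrompter(nn.Module):
    """Learnable pixel 'frame' added to the border of every input image.
    Only these parameters are trained; the pretrained backbone stays frozen."""

    def __init__(self, image_size: int = 224, pad_size: int = 30):
        super().__init__()
        base = image_size - 2 * pad_size  # interior region left untouched
        self.base = base
        self.pad_top = nn.Parameter(torch.randn(1, 3, pad_size, image_size))
        self.pad_bottom = nn.Parameter(torch.randn(1, 3, pad_size, image_size))
        self.pad_left = nn.Parameter(torch.randn(1, 3, base, pad_size))
        self.pad_right = nn.Parameter(torch.randn(1, 3, base, pad_size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assemble the frame: zeros in the middle, learned pixels on the border.
        hole = torch.zeros(1, 3, self.base, self.base, device=x.device)
        middle = torch.cat([self.pad_left, hole, self.pad_right], dim=3)
        prompt = torch.cat([self.pad_top, middle, self.pad_bottom], dim=2)
        return x + prompt  # broadcasts over the batch dimension

# Usage sketch: train only the prompt against a frozen classifier `model`
# (e.g. CLIP zero-shot logits); `model`, `images`, `labels` are assumed.
# prompter = PadPrompter()
# for p in model.parameters():
#     p.requires_grad_(False)
# opt = torch.optim.SGD(prompter.parameters(), lr=40.0)
# loss = torch.nn.functional.cross_entropy(model(prompter(images)), labels)
```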
Alternatives and similar repositories for visual_prompting:
Users interested in visual_prompting are comparing it to the repositories listed below.
- ☆185 · Updated last year
- Test-time Prompt Tuning (TPT) for zero-shot generalization in vision-language models (NeurIPS 2022) ☆180 · Updated 2 years ago
- [TPAMI] Searching prompt modules for parameter-efficient transfer learning. ☆228 · Updated last year
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT, [Tech report] Convpass ☆181 · Updated last year
- Code for Finetune like you pretrain: Improved finetuning of zero-shot vision models ☆98 · Updated last year
- [ICCV 2023] Prompt-aligned Gradient for Prompt Tuning ☆163 · Updated last year
- ☆168 · Updated last year
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆159 · Updated last year
- [NeurIPS 2023] Official implementation and model release of the paper "What Makes Good Examples for Visual In-Context Learning?" ☆173 · Updated last year
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of paper titled "Self-regulating Prompts: Foundational Model Adaptation without F… ☆261 · Updated last year
- [ICLR'23 Oral] Universal Few-shot Learning of Dense Prediction Tasks with Visual Token Matching ☆252 · Updated last year
- This repo is the official implementation of UPL (Unsupervised Prompt Learning for Vision-Language Models). ☆114 · Updated 3 years ago
- [NeurIPS 2023] Text data, code and pre-trained models for paper "Improving CLIP Training with Language Rewrites" ☆276 · Updated last year
- PyTorch code for the CVPR'23 paper: "CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning" ☆135 · Updated last year
- [NeurIPS'22] This is an official implementation for "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning". ☆180 · Updated last year
- [ICCV 2023] Code for "Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement" ☆144 · Updated 11 months ago
- Official repository for "CLIP model is an Efficient Continual Learner". ☆93 · Updated 2 years ago
- Official implementation and data release of the paper "Visual Prompting via Image Inpainting". ☆309 · Updated last year
- ☆515 · Updated 2 years ago
- Code for the paper: "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23] ☆98 · Updated last year
- Code for ICLR 2023 paper (Oral): Towards Stable Test-Time Adaptation in Dynamic Wild World ☆177 · Updated last year
- ☆605 · Updated last year
- Official code for ICCV 2023 paper, "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts" ☆99 · Updated last year
- [ICCV 2023] CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No ☆135 · Updated last year
- [ICLR 2023] The official code for our ICLR 2023 (top 25%) paper: "Neural Collapse Inspired Feature-Classifier Alignment for Few-Shot Class… ☆88 · Updated last year
- [ECCV 2022] A generalized long-tailed challenge that incorporates both the conventional class-wise imbalance and the overlooked attribute… ☆126 · Updated 8 months ago
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆213 · Updated 2 years ago
- [AAAI'25, CVPRW 2024] Official repository of paper titled "Learning to Prompt with Text Only Supervision for Vision-Language Models". ☆103 · Updated 3 months ago
- Code release for "CLIPood: Generalizing CLIP to Out-of-Distributions" (ICML 2023), https://arxiv.org/abs/2302.00864 ☆65 · Updated last year
- Official implementation of "Interpreting CLIP's Image Representation via Text-Based Decomposition" ☆203 · Updated 4 months ago