BWLONG / Awesome-Prompt-Learning-CV
This repository is a curated collection of resources on vision prompt learning, including papers and code.
☆34 · Updated last year
Alternatives and similar repositories for Awesome-Prompt-Learning-CV
Users interested in Awesome-Prompt-Learning-CV are comparing it to the repositories listed below
- ☆24 · Updated last year
- CVPR 2024: Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models ☆73 · Updated 11 months ago
- (CVPR 2024 Highlight) Novel Class Discovery for Ultra-Fine-Grained Visual Categorization (UFG-NCD) ☆19 · Updated 11 months ago
- ☆36 · Updated 2 weeks ago
- ☆47 · Updated last year
- The official GitHub page for the survey paper "CLIP-Powered Domain Generalization and Domain Adaptation: A Comprehensive Survey". And thi… ☆40 · Updated 2 weeks ago
- 🔥 MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition [Official, ICCV 2023] ☆30 · Updated 8 months ago
- PyTorch implementation of "Test-Time Adaptation against Multi-modal Reliability Bias" ☆35 · Updated 6 months ago
- ☆15 · Updated last year
- ☆42 · Updated 2 months ago
- The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models" ☆68 · Updated 2 months ago
- [CVPR 2024] Official repository for "Efficient Test-Time Adaptation of Vision-Language Models" ☆98 · Updated 11 months ago
- [ICLR'24] Consistency-guided Prompt Learning for Vision-Language Models ☆75 · Updated last year
- ☆36 · Updated last year
- Code and dataset for the AAAI 2024 paper "LAMM: Label Alignment for Multi-Modal Prompt Learning" ☆32 · Updated last year
- Adaptation of vision-language models (CLIP) to downstream tasks using local and global prompts ☆45 · Updated 8 months ago
- [CVPR 2024] PriViLege: Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners ☆51 · Updated 9 months ago
- [AAAI 2025] The official code for "TextRefiner: Internal Visual Feature as Efficient Refiner for Vision-Language Models Prompt Tuning" ☆41 · Updated 3 months ago
- Learning without Forgetting for Vision-Language Models (TPAMI 2025) ☆39 · Updated 4 months ago
- Code release for "CLIPood: Generalizing CLIP to Out-of-Distributions" (ICML 2023), https://arxiv.org/abs/2302.00864 ☆66 · Updated last year
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) ☆74 · Updated 4 months ago
- Official code for the ICCV 2023 paper "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts" ☆101 · Updated last year
- Official repository for the CVPR 2024 paper "Large Language Models are Good Prompt Learners for Low-Shot Image Classification" ☆36 · Updated 11 months ago
- [NeurIPS 2023] LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning ☆96 · Updated 10 months ago
- PyTorch implementation of "Domain-Agnostic Mutual Prompting for Unsupervised Domain Adaptation" ☆32 · Updated last year
- [CVPR 2024] Official implementation of the paper "DePT: Decoupled Prompt Tuning" ☆104 · Updated 3 weeks ago
- [NeurIPS 2023] Meta-Adapter ☆49 · Updated last year
- PyTorch implementation of "Erasing the Bias: Fine-Tuning Foundation Models for Semi-Supervised Learning" (ICML 2024) ☆20 · Updated last month
- This repository lists some awesome public projects on zero-shot/few-shot learning based on CLIP (Contrastive Language-Image Pre-Traini… ☆24 · Updated 6 months ago
- [ICCV 2023 Oral] IOMatch: Simplifying Open-Set Semi-Supervised Learning with Joint Inliers and Outliers Utilization ☆48 · Updated last year