BWLONG / Awesome-Prompt-Learning-CV
This repository is a collection of awesome things about vision prompts, including papers, code, etc.
⭐40 · Updated 2 years ago
Alternatives and similar repositories for Awesome-Prompt-Learning-CV
Users that are interested in Awesome-Prompt-Learning-CV are comparing it to the libraries listed below
- [CVPR 2024] PriViLege: Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners ⭐55 · Updated last year
- 🔥MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition [Official, ICCV 2023] ⭐30 · Updated last year
- [NeurIPS 2023] LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning ⭐101 · Updated 6 months ago
- Code for Label Propagation for Zero-shot Classification with Vision-Language Models (CVPR 2024) ⭐44 · Updated last year
- [CVPR 2024] Simple Semantic-Aided Few-Shot Learning ⭐52 · Updated last year
- [CVPR 2024] Official implementation of the paper "DePT: Decoupled Prompt Tuning" ⭐109 · Updated last month
- Official repository for the CVPR 2024 paper "Large Language Models are Good Prompt Learners for Low-Shot Image Classification" ⭐41 · Updated last year
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) ⭐73 · Updated 11 months ago
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of the paper titled "Self-regulating Prompts: Foundational Model Adaptation without F… ⭐281 · Updated 2 years ago
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ⭐56 · Updated last year
- ⭐55 · Updated last year
- Official code for the ICCV 2023 paper "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts" ⭐103 · Updated last year
- [CVPR 2024] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models ⭐89 · Updated last year
- [CVPR 2024] Official repository for "Efficient Test-Time Adaptation of Vision-Language Models" ⭐112 · Updated last year
- The official implementation of the CVPR 2024 paper "Learning Transferable Negative Prompts for Out-of-Distribution Detection" ⭐60 · Updated last year
- The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models" ⭐95 · Updated 8 months ago
- Adaptation of vision-language models (CLIP) to downstream tasks using local and global prompts ⭐50 · Updated 6 months ago
- [ECCV 2022] Official implementation of the paper "Acknowledging the Unknown for Multi-label Learning with Single Positive Labels" ⭐44 · Updated last year
- PyTorch implementation of "Test-Time Adaptation against Multi-modal Reliability Bias" ⭐44 · Updated last year
- Source code for the paper "Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts" (ICML 2024) ⭐97 · Updated last year
- The official code for "TextRefiner: Internal Visual Feature as Efficient Refiner for Vision-Language Models Prompt Tuning" (AAAI 2025) ⭐48 · Updated 10 months ago
- Official implementation of PCS from the paper "Prompt Vision Transformer for Domain Generalization" ⭐50 · Updated 2 years ago
- ⭐32 · Updated last year
- The official GitHub page for the survey paper "CLIP-Powered Domain Generalization and Domain Adaptation: A Comprehensive Survey". And thi… ⭐64 · Updated last week
- Learning without Forgetting for Vision-Language Models (TPAMI 2025) ⭐56 · Updated 6 months ago
- Semi-Supervised Domain Adaptation with Source Label Adaptation, accepted to CVPR 2023 ⭐43 · Updated last year
- Code release for "CLIPood: Generalizing CLIP to Out-of-Distributions" (ICML 2023), https://arxiv.org/abs/2302.00864 ⭐69 · Updated 2 years ago
- [ICLR'24] Consistency-guided Prompt Learning for Vision-Language Models ⭐84 · Updated last year
- [ICLR 2024] Test-Time RL with CLIP Feedback for Vision-Language Models ⭐97 · Updated 3 months ago
- PyTorch implementation for "Erasing the Bias: Fine-Tuning Foundation Models for Semi-Supervised Learning" (ICML 2024) ⭐24 · Updated 8 months ago