JindongGu / Awesome-Prompting-on-Vision-Language-Model
This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models.
☆507 · Updated 9 months ago
Alternatives and similar repositories for Awesome-Prompting-on-Vision-Language-Model
Users interested in Awesome-Prompting-on-Vision-Language-Model are comparing it to the repositories listed below.
- A curated list of prompt-based papers in computer vision and vision-language learning. ☆928 · Updated 2 years ago
- [CVPR 2023] Official repository of the paper "MaPLe: Multi-modal Prompt Learning". ☆791 · Updated 2 years ago
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆873 · Updated 9 months ago
- A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP. ☆734 · Updated 3 weeks ago
- ☆540 · Updated last year
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆409 · Updated last year
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,195 · Updated 2 years ago
- A survey on multimodal learning research. ☆334 · Updated 2 years ago
- ☆562 · Updated 3 years ago
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)". ☆1,350 · Updated last year
- [CVPR 2024] Official PyTorch code for "PromptKD: Unsupervised Prompt Distillation for Vision-Language Models". ☆346 · Updated last week
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of the paper "Self-regulating Prompts: Foundational Model Adaptation without F… ☆281 · Updated 2 years ago
- ☆356 · Updated last year
- 📖 A curated list of resources dedicated to hallucination in multimodal large language models (MLLMs). ☆930 · Updated 3 months ago
- Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Mod… ☆353 · Updated 9 months ago
- ☆651 · Updated 2 years ago
- Visualizing the attention of vision-language models. ☆268 · Updated 9 months ago
- Official implementation of "Interpreting CLIP's Image Representation via Text-Based Decomposition". ☆234 · Updated 6 months ago
- Awesome papers & datasets specifically focused on long-term videos. ☆335 · Updated 2 months ago
- Test-time Prompt Tuning (TPT) for zero-shot generalization in vision-language models (NeurIPS 2022). ☆199 · Updated 3 years ago
- A curated list of awesome multimodal studies. ☆301 · Updated last week
- Collection of Composed Image Retrieval (CIR) papers. ☆289 · Updated last month
- Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR … ☆288 · Updated 2 years ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆859 · Updated 5 months ago
- Code for the paper "Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters" (CVPR 2024) ☆260 · Updated 3 months ago
- ☆174 · Updated last year
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆364 · Updated last year
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆414 · Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,229 · Updated last year
- A curated list of papers and resources related to Described Object Detection, Open-Vocabulary/Open-World Object Detection and Referring E… ☆337 · Updated last month