JindongGu / Awesome-Prompting-on-Vision-Language-Model
This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models.
☆508 · Updated 9 months ago
Alternatives and similar repositories for Awesome-Prompting-on-Vision-Language-Model
Users that are interested in Awesome-Prompting-on-Vision-Language-Model are comparing it to the libraries listed below.
- A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP. ☆739 · Updated last month
- A curated list of prompt-based papers in computer vision and vision-language learning. ☆928 · Updated 2 years ago
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆875 · Updated 10 months ago
- ☆544 · Updated last year
- [CVPR 2023] Official repository of paper titled "MaPLe: Multi-modal Prompt Learning". ☆795 · Updated 2 years ago
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆410 · Updated last year
- ☆356 · Updated last year
- 📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM). ☆954 · Updated 3 months ago
- Awesome papers & datasets specifically focused on long-term videos. ☆339 · Updated 3 months ago
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)". ☆1,350 · Updated last year
- Collection of Composed Image Retrieval (CIR) papers. ☆300 · Updated 3 weeks ago
- A curated list of awesome Multimodal studies. ☆310 · Updated last month
- A survey on multimodal learning research. ☆334 · Updated 2 years ago
- [CVPR 2024] Official PyTorch Code for "PromptKD: Unsupervised Prompt Distillation for Vision-Language Models" ☆344 · Updated last month
- Visualizing the attention of vision-language models ☆273 · Updated 10 months ago
- Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Mod… ☆357 · Updated 9 months ago
- Official implementation of "Interpreting CLIP's Image Representation via Text-Based Decomposition" ☆233 · Updated 7 months ago
- ☆568 · Updated 3 years ago
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of paper titled "Self-regulating Prompts: Foundational Model Adaptation without F… ☆281 · Updated 2 years ago
- Test-time Prompt Tuning (TPT) for zero-shot generalization in vision-language models (NeurIPS 2022) ☆204 · Updated 3 years ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆861 · Updated 5 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆368 · Updated last year
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆416 · Updated last year
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,202 · Updated 2 years ago
- [Pattern Recognition 25] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆459 · Updated 10 months ago
- ☆655 · Updated 2 years ago
- An up-to-date curated list of state-of-the-art research work, papers & resources on hallucinations in large vision-language models ☆245 · Updated 3 months ago
- A comprehensive collection and survey of vision-language model papers and model GitHub repositories. Continuously updated. ☆500 · Updated last week
- ☆175 · Updated 2 years ago
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆555 · Updated last year