JindongGu / Awesome-Prompting-on-Vision-Language-Model
This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models.
☆491 · Updated 6 months ago
Alternatives and similar repositories for Awesome-Prompting-on-Vision-Language-Model
Users interested in Awesome-Prompting-on-Vision-Language-Model are comparing it to the libraries listed below.
- A curated list of prompt-based papers in computer vision and vision-language learning. ☆925 · Updated last year
- [CVPR 2023] Official repository of paper titled "MaPLe: Multi-modal Prompt Learning". ☆777 · Updated 2 years ago
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆873 · Updated 6 months ago
- A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP. ☆679 · Updated 3 weeks ago
- ☆529 · Updated 10 months ago
- A survey on multimodal learning research. ☆331 · Updated 2 years ago
- ☆354 · Updated last year
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆407 · Updated last year
- A curated list of awesome multimodal studies. ☆277 · Updated 2 months ago
- [CVPR 2024] Official PyTorch code for "PromptKD: Unsupervised Prompt Distillation for Vision-Language Models". ☆330 · Updated last month
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)". ☆1,337 · Updated last year
- ☆547 · Updated 3 years ago
- 📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM). ☆859 · Updated last week
- Official implementation of "Interpreting CLIP's Image Representation via Text-Based Decomposition". ☆223 · Updated 4 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want. ☆844 · Updated 2 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses that are seamlessly integrated with object segmentation masks. ☆918 · Updated 2 months ago
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of paper titled "Self-regulating Prompts: Foundational Model Adaptation without Forgetting". ☆277 · Updated 2 years ago
- Collection of Composed Image Retrieval (CIR) papers. ☆266 · Updated last month
- Visualizing the attention of vision-language models. ☆239 · Updated 7 months ago
- Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Models (MLLMs). ☆342 · Updated 6 months ago
- A curated list of papers and resources related to Described Object Detection, Open-Vocabulary/Open-World Object Detection and Referring Expression Comprehension. ☆313 · Updated 2 months ago
- Awesome papers & datasets specifically focused on long-term videos. ☆314 · Updated 2 months ago
- ☆638 · Updated last year
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning. ☆377 · Updated 9 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding. ☆319 · Updated last year
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,164 · Updated 2 years ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts. ☆331 · Updated last year
- Test-time Prompt Tuning (TPT) for zero-shot generalization in vision-language models (NeurIPS 2022). ☆198 · Updated 2 years ago
- An up-to-date curated list of state-of-the-art research on hallucinations in large vision-language models: papers & resources. ☆179 · Updated 2 months ago
- ☆174 · Updated last year