JindongGu / Awesome-Prompting-on-Vision-Language-Model
This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models.
⭐ 405 · Updated last month
Alternatives and similar repositories for Awesome-Prompting-on-Vision-Language-Model:
Users interested in Awesome-Prompting-on-Vision-Language-Model are comparing it to the repositories listed below.
- Recent LLM-based CV and related works. Comments and contributions are welcome! ⭐ 847 · Updated 6 months ago
- A curated list of resources dedicated to hallucination in multimodal large language models (MLLMs). ⭐ 499 · Updated last week
- A curated list of prompt-based papers in computer vision and vision-language learning. ⭐ 904 · Updated 11 months ago
- A survey of multimodal learning research. ⭐ 320 · Updated last year
- A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP. ⭐ 337 · Updated this week
- [CVPR 2023] Official repository of the paper "MaPLe: Multi-modal Prompt Learning". ⭐ 684 · Updated last year
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ⭐ 393 · Updated 2 months ago
- Awesome papers & datasets specifically focused on long-term videos.