GaryJiajia / OFv2_ICL_VQA
[CVPR 2024] How to Configure Good In-Context Sequence for Visual Question Answering
☆18 · Updated last week
Alternatives and similar repositories for OFv2_ICL_VQA
Users who are interested in OFv2_ICL_VQA are comparing it to the repositories listed below.
- The Code for Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models ☆15 · Updated 8 months ago
- ☆15 · Updated 2 years ago
- [NeurIPS 2023] Exploring Diverse In-Context Configurations for Image Captioning ☆38 · Updated 6 months ago
- MMICL, a state-of-the-art VLM with the in-context learning ability from ICL, PKU ☆47 · Updated last year
- Official repository for the A-OKVQA dataset ☆84 · Updated last year
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆51 · Updated last year
- VQACL: A Novel Visual Question Answering Continual Learning Setting (CVPR'23) ☆39 · Updated last year
- This is the first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆70 · Updated 10 months ago
- This is the official repository for the paper "Visually-Prompted Language Model for Fine-Grained Scene Graph Generation in an Open World"… ☆47 · Updated last year
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆146 · Updated last year
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆45 · Updated 10 months ago
- [CVPR 2025] A ChatGPT-Prompted Visual Hallucination Evaluation Dataset, featuring over 100,000 data samples and four advanced evaluation mod… ☆15 · Updated last month
- Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR'24, Highlight) ☆72 · Updated 11 months ago
- EMNLP 2023 - InfoSeek: A New VQA Benchmark focused on Visual Info-Seeking Questions ☆23 · Updated last year
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆88 · Updated 6 months ago
- Source code for the EMNLP 2022 paper "PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models" ☆48 · Updated 2 years ago
- [NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models ☆44 · Updated last year
- Official implementation of HawkEye: Training Video-Text LLMs for Grounding Text in Videos ☆43 · Updated last year
- ☆70 · Updated 6 years ago
- [ACL'24 Findings] Video-Language Understanding: A Survey from Model Architecture, Model Training, and Data Perspectives ☆38 · Updated 9 months ago
- 🔎 Official code for the paper "VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation" ☆35 · Updated 2 months ago
- [CVPR 2025] Interleaved-Modal Chain-of-Thought ☆45 · Updated last month
- Latest Advances on (RL-based) Multimodal Reasoning and Generation in Multimodal Large Language Models ☆25 · Updated 2 weeks ago
- Official implementation for the MM'22 paper ☆13 · Updated 2 years ago
- ☆49 · Updated last year
- The code for the paper "A Symmetric Dual Encoding Dense Retrieval Framework for Knowledge-Intensive Visual Question Answering" ☆12 · Updated last year
- ☆12 · Updated last year
- Codebase for the AAAI 2024 conference paper "Visual Chain-of-Thought Prompting for Knowledge-based Visual Reasoning" ☆31 · Updated 2 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆74 · Updated 6 months ago
- [ICML 2024] "Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models" ☆51 · Updated 9 months ago