GaryJiajia / OFv2_ICL_VQA
[CVPR 2024] How to Configure Good In-Context Sequence for Visual Question Answering
☆21 · Updated 4 months ago
Alternatives and similar repositories for OFv2_ICL_VQA
Users interested in OFv2_ICL_VQA are comparing it to the repositories listed below.
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆149 · Updated last year
- The code for Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models ☆16 · Updated last year
- The first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆78 · Updated last year
- [CVPR 2025 Highlight] A ChatGPT-prompted visual hallucination evaluation dataset, featuring over 100,000 data samples and four advanced eval… ☆23 · Updated 5 months ago
- VQACL: A Novel Visual Question Answering Continual Learning Setting (CVPR'23) ☆41 · Updated last year
- [ICML 2024] Official implementation of "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆99 · Updated 10 months ago
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆50 · Updated 2 months ago
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigation ☆98 · Updated last year
- [NeurIPS 2023] Exploring Diverse In-Context Configurations for Image Captioning ☆41 · Updated 10 months ago
- ☆17 · Updated 2 years ago
- ☆64 · Updated last year
- Official repository for the A-OKVQA dataset ☆99 · Updated last year
- [EMNLP 2024 Findings] The official PyTorch implementation of EchoSight: Advancing Visual-Language Models with Wiki Knowledge ☆75 · Updated 3 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆55 · Updated 11 months ago
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆224 · Updated last month
- ☆82 · Updated last year
- Official resource for the paper Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models (ACL 20… ☆12 · Updated last year
- [NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models ☆46 · Updated last year
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆55 · Updated last year
- [EMNLP 2023] InfoSeek: A New VQA Benchmark Focusing on Visual Info-Seeking Questions ☆25 · Updated last year
- [ACM MM 2023] The released code of the paper "Deconfounded Visual Question Generation with Causal Inference" ☆10 · Updated last year
- [CVPR 2024] Official code for the paper "Compositional Chain-of-Thought Prompting for Large Multimodal Models" ☆136 · Updated last year
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆320 · Updated last year
- Colorful Prompt Tuning for Pre-trained Vision-Language Models ☆49 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆115 · Updated last month
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (CVPR 2024) ☆48 · Updated last year
- [EMNLP 2023] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆93 · Updated last month
- [ICLR 2024 Spotlight] Sentence-level Prompts Benefit Composed Image Retrieval ☆87 · Updated last year
- [ICML 2024] "Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models" ☆56 · Updated last year