lhanchao777 / LVLM-Hallucinations-Survey
This is the first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and continuously update our survey, we maintain this repository of relevant references.
☆68 · Updated 9 months ago
Alternatives and similar repositories for LVLM-Hallucinations-Survey:
Users interested in LVLM-Hallucinations-Survey are comparing it to the repositories listed below.
- Curated list of awesome LMM hallucination papers, methods & resources. ☆149 · Updated last year
- Up-to-date curated list of state-of-the-art large vision-language model hallucination research, papers & resources. ☆125 · Updated last month
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆94 · Updated last year
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆86 · Updated 5 months ago
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆89 · Updated 5 months ago
- [EMNLP 2024 Findings] The official PyTorch implementation of EchoSight: Advancing Visual-Language Models with Wiki Knowledge. ☆60 · Updated last month
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆206 · Updated last year
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆273 · Updated 7 months ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆146 · Updated last year
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆114 · Updated 6 months ago
- ☆47 · Updated 5 months ago
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆71 · Updated 4 months ago
- CHAIR, a rule-based metric for evaluating object hallucination in caption generation (see the sketch after this list). ☆28 · Updated last year
- ☆116 · Updated 2 months ago
- This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual Debias Decoding strat… ☆78 · Updated 2 months ago
- ☆73 · Updated 11 months ago
- ☆9 · Updated last year
- HallE-Control: Controlling Object Hallucination in LMMs ☆30 · Updated last year
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆83 · Updated last year
- This repository contains the code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and the LLaMA-3.2-vi… ☆104 · Updated 6 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆58 · Updated last month
- Official resource for the paper Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models (ACL 20… ☆11 · Updated 8 months ago
- [CVPR25] A ChatGPT-Prompted Visual Hallucination Evaluation Dataset, featuring over 100,000 data samples and four advanced evaluation mod… ☆16 · Updated 3 weeks ago
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆46 · Updated last year
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆281 · Updated 5 months ago
- The official repo for the EMNLP 2024 (main) paper "EFUF: Efficient Fine-grained Unlearning Framework for Mitigating Hallucinations in Multimo… ☆19 · Updated last month
- [ICLR'25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆169 · Updated 2 weeks ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆87 · Updated last year
- MoCLE (the first MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆37 · Updated last year
- [NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models ☆43 · Updated last year
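
Several of the repositories above evaluate object hallucination with the CHAIR metric mentioned in the list. Below is a minimal, illustrative sketch of how the two CHAIR scores are commonly computed, assuming a flat object vocabulary and naive substring matching; the reference implementation additionally handles MSCOCO synonym mapping and proper tokenization. The function `chair_scores` and its argument names are hypothetical and only for this sketch.

```python
# Minimal, illustrative sketch of the CHAIR metric (instance- and sentence-level).
# Assumption: flat object vocabulary and naive substring matching; real
# implementations also map synonyms (e.g., MSCOCO) and tokenize captions.

def chair_scores(captions, gt_objects, object_vocab):
    """Return (CHAIR_i, CHAIR_s) for a batch of generated captions.

    captions:     list[str], one generated caption per image
    gt_objects:   list[set[str]], ground-truth object names per image
    object_vocab: set[str], object names to search for in captions
    """
    total_mentions = 0         # all object mentions across captions
    hallucinated_mentions = 0  # mentions of objects absent from the image
    hallucinated_captions = 0  # captions with at least one such mention

    for caption, truth in zip(captions, gt_objects):
        text = caption.lower()
        mentioned = [obj for obj in object_vocab if obj in text]
        hallucinated = [obj for obj in mentioned if obj not in truth]
        total_mentions += len(mentioned)
        hallucinated_mentions += len(hallucinated)
        hallucinated_captions += 1 if hallucinated else 0

    chair_i = hallucinated_mentions / max(total_mentions, 1)  # instance level
    chair_s = hallucinated_captions / max(len(captions), 1)   # sentence level
    return chair_i, chair_s


if __name__ == "__main__":
    # Toy example: the second caption hallucinates a "dog".
    caps = ["a cat on a chair", "a dog next to a cat"]
    gts = [{"cat", "chair"}, {"cat"}]
    vocab = {"cat", "dog", "chair"}
    print(chair_scores(caps, gts, vocab))  # -> (0.25, 0.5)
```

Lower is better for both scores: CHAIR_i is the fraction of mentioned object instances that do not appear in the image, and CHAIR_s is the fraction of captions containing at least one hallucinated object.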