lhanchao777 / LVLM-Hallucinations-Survey
This is the first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and continuously update our survey, we maintain this repository of relevant references.
☆73 · Updated last year
Alternatives and similar repositories for LVLM-Hallucinations-Survey
Users interested in LVLM-Hallucinations-Survey are comparing it to the repositories listed below.
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆301 · Updated 9 months ago
- An up-to-date curated list of state-of-the-art research on hallucinations in large vision-language models: papers & resources ☆150 · Updated last week
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆92 · Updated 8 months ago
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆94 · Updated 8 months ago
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆216 · Updated last year
- [EMNLP 2024 Findings] The official PyTorch implementation of EchoSight: Advancing Visual-Language Models with Wiki Knowledge. ☆72 · Updated last month
- 😎 curated list of awesome LMM hallucinations papers, methods & resources. ☆149 · Updated last year
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allo… ☆354 · Updated 11 months ago
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆95 · Updated last year
- ☆132 · Updated 5 months ago
- [CVPR25 Highlight] A ChatGPT-Prompted Visual hallucination Evaluation Dataset, featuring over 100,000 data samples and four advanced eval… ☆18 · Updated 3 months ago
- [ICML 2025] Official implementation of paper "Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in… ☆145 · Updated 3 weeks ago
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆93 · Updated 7 months ago
- ☆48 · Updated 8 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆132 · Updated 8 months ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆147 · Updated last year
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆293 · Updated 8 months ago
- Code for ICLR 2025 Paper: Visual Description Grounding Reduces Hallucinations and Boosts Reasoning in LVLMs ☆17 · Updated 2 months ago
- ☆103 · Updated 3 weeks ago
- Official resource for paper "Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models" (ACL 20… ☆12 · Updated 11 months ago
- HallE-Control: Controlling Object Hallucination in LMMs ☆31 · Updated last year
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆62 · Updated 5 months ago
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆128 · Updated last year
- [CVPR 2024] Official Code for the Paper "Compositional Chain-of-Thought Prompting for Large Multimodal Models" ☆134 · Updated last year
- [NeurIPS 2024] Mitigating Object Hallucination via Concentric Causal Attention ☆58 · Updated 7 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models ☆68 · Updated 4 months ago
- [ACM Multimedia 2025] This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual… ☆81 · Updated 5 months ago
- ☆95 · Updated 4 months ago
- [CVPR'25] Interleaved-Modal Chain-of-Thought ☆68 · Updated 3 months ago
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆284 · Updated last year