lhanchao777 / LVLM-Hallucinations-Survey
This repository accompanies the first released survey paper on hallucinations in large vision-language models (LVLMs). To keep track of this field and continuously update our survey, we maintain it as a collection of relevant references.
★70 · Updated 10 months ago
Alternatives and similar repositories for LVLM-Hallucinations-Survey
Users interested in LVLM-Hallucinations-Survey are comparing it to the repositories listed below.
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" · ★88 · Updated 5 months ago
- A curated list of awesome LMM hallucination papers, methods & resources · ★149 · Updated last year
- An up-to-date curated list of state-of-the-art large vision-language model hallucination research, papers & resources · ★129 · Updated 3 weeks ago
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating · ★95 · Updated last year
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) · ★90 · Updated 6 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding · ★278 · Updated 7 months ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models · ★146 · Updated last year
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" · ★207 · Updated last year
- [EMNLP 2024 Findings] The official PyTorch implementation of EchoSight: Advancing Visual-Language Models with Wiki Knowledge · ★61 · Updated 2 months ago
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation · ★82 · Updated 5 months ago
- CHAIR: a rule-based metric for evaluating object hallucination in caption generation · ★29 · Updated last year
- (no description) · ★46 · Updated 6 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs · ★120 · Updated 6 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" · ★83 · Updated last year
- HallE-Control: Controlling Object Hallucination in LMMs · ★31 · Updated last year
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU · ★47 · Updated last year
- The official repo for Debiasing Large Visual Language Models, including a post-hoc debias method and Visual Debias Decoding strat… · ★78 · Updated 3 months ago
- (no description) · ★119 · Updated 3 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models · ★62 · Updated 2 months ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization · ★88 · Updated last year
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allo… · ★341 · Updated 9 months ago
- The official repo for the EMNLP 2024 (main) paper "EFUF: Efficient Fine-grained Unlearning Framework for Mitigating Hallucinations in Multimo… · ★19 · Updated last month
- [CVPR '25] Interleaved-Modal Chain-of-Thought · ★43 · Updated last month
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) · ★50 · Updated 7 months ago
- (no description) · ★74 · Updated 11 months ago
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering · ★57 · Updated 6 months ago
- This repository continuously updates the latest papers, technical reports, and benchmarks on multimodal reasoning · ★41 · Updated 2 months ago
- [ICLR '25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" · ★203 · Updated last month
- Official resource for the paper Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models (ACL 20… · ★11 · Updated 9 months ago
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation · ★120 · Updated last year
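Several entries above rely on the CHAIR metric, whose rule-based idea is simple: count object words mentioned in a generated caption that do not appear in the image's ground-truth annotations. A minimal sketch follows; the real metric additionally maps synonyms and restricts matching to the MSCOCO 80-object vocabulary, so the exact-word matching and the `chair_scores` helper here are illustrative assumptions, not the official implementation:

```python
def chair_scores(captions, gt_objects, vocab):
    """Simplified CHAIR: per-instance (CHAIR_i) and per-sentence (CHAIR_s) rates.

    captions   : list of generated caption strings
    gt_objects : list of sets of ground-truth object words, one set per image
    vocab      : set of object words the metric checks for
    """
    hallucinated = 0   # mentioned objects absent from the image
    mentioned = 0      # all object mentions across captions
    bad_captions = 0   # captions containing >= 1 hallucinated object
    for cap, gt in zip(captions, gt_objects):
        words = set(cap.lower().split())
        objs = words & vocab           # object words the caption mentions
        halluc = objs - gt             # mentioned but not in the image
        mentioned += len(objs)
        hallucinated += len(halluc)
        bad_captions += bool(halluc)
    chair_i = hallucinated / max(mentioned, 1)
    chair_s = bad_captions / max(len(captions), 1)
    return chair_i, chair_s
```

For example, with captions `["a dog and a cat", "a dog"]` against an image set where only a dog is annotated, one of three mentioned objects is hallucinated (CHAIR_i = 1/3) and one of two captions contains a hallucination (CHAIR_s = 1/2).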