xieyuquanxx / awesome-Large-MultiModal-Hallucination
A curated list of awesome LMM hallucination papers, methods & resources.
★150 · updated Mar 23, 2024
Alternatives and similar repositories for awesome-Large-MultiModal-Hallucination
Users interested in awesome-Large-MultiModal-Hallucination also compare it to the repositories listed below.
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" (★108 · updated Dec 4, 2024)
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" (★247 · updated Aug 21, 2025)
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding (★378 · updated Oct 7, 2024); see the decoding sketch after this list
- A curated list of resources dedicated to hallucination in multimodal large language models (MLLMs) (★979 · updated Sep 27, 2025)
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models (★155 · updated Apr 30, 2024)
- [ACM Multimedia 2025] This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual… (★82 · updated Feb 22, 2025)
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning (★296 · updated Mar 13, 2024)
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allo… (★396 · updated Aug 24, 2024)
- [CVPR'25 Highlight] A ChatGPT-Prompted Visual Hallucination Evaluation Dataset, featuring over 100,000 data samples and four advanced eval… (★31 · updated Apr 16, 2025)
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… (★325 · updated Oct 14, 2025)
- [NAACL 2024] Vision language model that reduces hallucinations through self-feedback guided revision. Visualizes attentions on image feat… (★47 · updated Aug 21, 2024)
- List of papers on hallucination in LMMs (★10 · updated Nov 29, 2023)
- ★101 · updated Dec 22, 2023
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation (★153 · updated Jan 15, 2024)
- ★93 · updated Mar 29, 2019
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) (★52 · updated Jul 16, 2024)
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) (★101 · updated Nov 21, 2024)
- HallE-Control: Controlling Object Hallucination in LMMs (★31 · updated Apr 10, 2024)
- ★55 · updated Apr 1, 2024
- An up-to-date curated list of state-of-the-art research, papers & resources on hallucination in large vision-language models (★265 · updated Feb 8, 2026)
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" (★106 · updated Aug 21, 2025)
- ★56 · updated Nov 26, 2024
- LLM hallucination paper list (★331 · updated Mar 11, 2024)
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large… (★1,076 · updated Sep 27, 2025)
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" (★60 · updated Aug 23, 2024)
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" (★537 · updated Jan 17, 2025)
- Aligning LMMs with Factually Augmented RLHF (★392 · updated Nov 1, 2023)
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning (★91 · updated Apr 30, 2024)
- Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts (★41 · updated Sep 29, 2024)
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback (★306 · updated Sep 11, 2024)
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs (★163 · updated Nov 6, 2024)
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model (★281 · updated Jun 25, 2024)
- ★360 · updated Jan 27, 2024
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention (★61 · updated Jul 16, 2024)
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) (★322 · updated Jan 20, 2025)
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating (★97 · updated Jan 29, 2024)
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" (★286 · updated May 22, 2025)
- This repo contains the code for the paper "Understanding and Mitigating Hallucinations in Large Vision-Language Models via Modular Attrib… (★33 · updated Jul 14, 2025)
- An RLHF Infrastructure for Vision-Language Models (★196 · updated Nov 15, 2024)
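
Several repositories above share one decoding-time recipe: contrast the next-token distribution produced under the intended conditioning (the clean image in VCD and HALC, a late transformer layer in DoLa) against a distribution that leans on the language prior (a distorted or blank image, an early layer), then down-weight tokens the prior alone would produce. Below is a minimal NumPy sketch of a single decoding step in that style; the function name, toy vocabulary, and the default alpha/beta values are illustrative assumptions, not any listed repo's actual API.

```python
# A minimal sketch of contrastive decoding for hallucination mitigation,
# assuming you already have per-step logits from two forward passes of the
# same model: one conditioned on the clean image, one on a distorted image.
import numpy as np

def contrastive_decode_step(logits_full: np.ndarray,
                            logits_distorted: np.ndarray,
                            alpha: float = 1.0,
                            beta: float = 0.1) -> int:
    """Pick the next token id for one decoding step.

    logits_full      -- logits given the clean image (shape: [vocab])
    logits_distorted -- logits given a noised/blank image (shape: [vocab])
    alpha            -- contrast strength; alpha=0 recovers greedy decoding
    beta             -- plausibility cutoff relative to the max probability
    """
    # Contrast: amplify what the clean image supports over the language prior.
    contrast = (1.0 + alpha) * logits_full - alpha * logits_distorted

    # Adaptive plausibility constraint: only tokens reasonably likely under
    # the clean-image distribution may be chosen, so the contrast term
    # cannot promote nonsense tokens that both passes rank very low.
    probs_full = np.exp(logits_full - logits_full.max())
    probs_full /= probs_full.sum()
    plausible = probs_full >= beta * probs_full.max()
    contrast[~plausible] = -np.inf

    return int(np.argmax(contrast))

# Toy usage with a 5-token vocabulary: greedy decoding on logits_full alone
# would pick token 3, which the language prior (distorted pass) also pushes;
# the contrast suppresses it in favour of the image-grounded token 0.
full = np.array([2.0, 0.5, 0.1, 2.1, 0.0])
distorted = np.array([0.5, 0.4, 0.1, 2.4, 0.0])
print(contrastive_decode_step(full, distorted))  # -> 0
```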