NishilBalar / Awesome-LVLM-Hallucination
An up-to-date curated list of state-of-the-art research, papers & resources on hallucinations in large vision-language models (LVLMs)
☆236 · Updated 2 months ago
Alternatives and similar repositories for Awesome-LVLM-Hallucination
Users interested in Awesome-LVLM-Hallucination are comparing it to the repositories listed below
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding (see the sketch after this list) ☆364 · Updated last year
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆98 · Updated last year
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆106 · Updated last year
- This is the first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆90 · Updated last year
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allo… ☆390 · Updated last year
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆238 · Updated 4 months ago
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆127 · Updated 3 months ago
- 😎 A curated list of awesome LMM hallucination papers, methods & resources ☆150 · Updated last year
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆321 · Updated 2 months ago
- Code for "Reducing Hallucinations in Vision-Language Models via Latent Space Steering" ☆98 · Updated last year
- Visualizing the attention of vision-language models ☆268 · Updated 10 months ago
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆146 · Updated last year
- [NeurIPS 2024] Mitigating Object Hallucination via Concentric Causal Attention ☆66 · Updated 3 months ago
- ☆153 · Updated 10 months ago
- ☆111 · Updated 3 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆157 · Updated last year
- [ICML 2025] Official implementation of the paper "Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in… ☆178 · Updated 3 months ago
- [EMNLP 2024 Findings] The official PyTorch implementation of "EchoSight: Advancing Visual-Language Models with Wiki Knowledge" ☆75 · Updated 6 months ago
- ☆55 · Updated last year
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆155 · Updated last year
- An RLHF Infrastructure for Vision-Language Models ☆189 · Updated last year
- [ICLR'25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆310 · Updated 8 months ago
- Latest Advances on Modality Priors in Multimodal Large Language Models ☆29 · Updated 2 weeks ago
- [NeurIPS 2025] More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models ☆73 · Updated 6 months ago
- ☆66 · Updated 4 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge ☆86 · Updated 10 months ago
- A Survey on Benchmarks of Multimodal Large Language Models ☆145 · Updated 5 months ago
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆98 · Updated last year
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆56 · Updated last year
- A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository agg… ☆173 · Updated 2 months ago
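Several entries above (VCD, HALC, OPERA, and the dynamic correction decoding work) revolve around the same contrastive-decoding idea: compare the model's next-token logits with and without reliable visual evidence, and boost tokens that depend on actually seeing the image while suppressing tokens driven by language priors alone. The snippet below is a minimal, hedged sketch of that core logit adjustment only, not any paper's released code; the helper name `vcd_adjusted_logits`, the toy random tensors, and α = 1.0 are illustrative assumptions, and the actual methods add further machinery such as adaptive plausibility filtering.

```python
import torch
import torch.nn.functional as F

def vcd_adjusted_logits(logits_clean: torch.Tensor,
                        logits_distorted: torch.Tensor,
                        alpha: float = 1.0) -> torch.Tensor:
    # Contrast the two next-token distributions: tokens the model prefers
    # only when it truly sees the clean image get amplified; tokens it
    # would emit anyway (from the distorted image) get suppressed.
    # (Hypothetical helper; alpha is an illustrative hyperparameter.)
    return (1.0 + alpha) * logits_clean - alpha * logits_distorted

# Toy usage: random logits stand in for two LVLM forward passes,
# one conditioned on the original image, one on a heavily noised copy.
vocab_size = 8
logits_clean = torch.randn(vocab_size)
logits_distorted = torch.randn(vocab_size)

probs = F.softmax(vcd_adjusted_logits(logits_clean, logits_distorted), dim=-1)
print("next token id:", torch.argmax(probs).item())
```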