jiazhen-code / PhD
[CVPR25 Highlight] A ChatGPT-Prompted Visual Hallucination Evaluation Dataset, featuring over 100,000 data samples and four advanced evaluation modes. The dataset includes extensive contextual descriptions, counterintuitive images, and clear indicators of hallucination items.
☆31 · Updated 9 months ago
Alternatives and similar repositories for PhD
Users that are interested in PhD are comparing it to the libraries listed below
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆108 · Updated last year
- This is the first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆91 · Updated last year
- ☆27 · Updated 9 months ago
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆134 · Updated 4 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆375 · Updated last year
- [ICLR 2025] Data-Augmented Phrase-Level Alignment for Mitigating Object Hallucination ☆18 · Updated last year
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆103 · Updated last year
- [CVPR 2025] Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Att… ☆62 · Updated 3 months ago
- [CVPR' 25] Interleaved-Modal Chain-of-Thought ☆106 · Updated last month
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆60 · Updated last year
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆100 · Updated last year
- ☆156 · Updated 11 months ago
- [NeurIPS 2025] More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models ☆74 · Updated 8 months ago
- Code for paper: Nullu: Mitigating Object Hallucinations in Large Vision-Language Models via HalluSpace Projection ☆49 · Updated 10 months ago
- ☆56 · Updated last year
- ☆71 · Updated 6 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆162 · Updated last year
- [ICML 2025] Official implementation of paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in… ☆186 · Updated 4 months ago
- Up-to-date curated list of state-of-the-art large vision-language model hallucination research work, papers & resources ☆262 · Updated 4 months ago
- Official codebase for the paper "Reasoning Within the Mind: Dynamic Multimodal Interleaving in Latent Space" ☆58 · Updated last month
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆156 · Updated last year
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆69 · Updated 4 months ago
- Code for ICLR 2025 Paper: Visual Description Grounding Reduces Hallucinations and Boosts Reasoning in LVLMs ☆22 · Updated 9 months ago
- [EMNLP 2024 Findings] The official PyTorch implementation of EchoSight: Advancing Visual-Language Models with Wiki Knowledge. ☆78 · Updated 2 weeks ago
- [ICML 2024] "Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models" ☆58 · Updated last year
- [NeurIPS 2024] Mitigating Object Hallucination via Concentric Causal Attention ☆65 · Updated 5 months ago
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆60 · Updated last year
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆58 · Updated last year
- Official resource for paper Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models (ACL 20… ☆15 · Updated last year
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆89 · Updated 11 months ago