TianyunYoung / Hallucination-Attribution
This repo contains the code for the ICLR 2025 paper "Understanding and Mitigating Hallucinations in Large Vision-Language Models via Modular Attribution and Intervention".
☆33, updated Jul 14, 2025
Alternatives and similar repositories for Hallucination-Attribution
Users interested in Hallucination-Attribution are comparing it to the repositories listed below.
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering (☆103, updated Nov 23, 2024)
- Code for paper: Visual Signal Enhancement for Object Hallucination Mitigation in Multimodal Large Language Models (☆52, updated Dec 18, 2024)
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention (☆61, updated Jul 16, 2024)
- (☆20, updated Dec 3, 2025)
- [ICML 2025] Official implementation of paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in… (☆171, updated Sep 25, 2025)
- Code for Paper (Policy Optimization in RLHF: The Impact of Out-of-preference Data) (☆28, updated Dec 19, 2023)
- A library of visualization tools for the interpretability and hallucination analysis of large vision-language models (LVLMs). (☆41, updated May 22, 2025)
- Code for Blog Post: Can Better Cold-Start Strategies Improve RL Training for LLMs? (☆19, updated Mar 9, 2025)
- [ACL 2024] Mitigating Hallucinations in Large Vision-Language Models with Instruction Contrastive Decoding (☆15, updated Nov 10, 2025)
- This repo contains script to download MUSIC dataset from youtube (☆12, updated Jan 19, 2024)
- [NeurIPS 2025] More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models (☆75, updated May 31, 2025)
- [ICLR 2025] Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality (☆60, updated Jul 5, 2025)
- [NeurIPS 2025] Official repository for “FlowCut: Rethinking Redundancy via Information Flow for Efficient Vision-Language Models” (☆28, updated Dec 9, 2025)
- ✨✨The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio (☆52, updated Jul 11, 2025)
- Code for ICLR 2025 Paper: Visual Description Grounding Reduces Hallucinations and Boosts Reasoning in LVLMs (☆22, updated May 7, 2025)
- [ECCV2024] Reflective Instruction Tuning: Mitigating Hallucinations in Large Vision-Language Models (☆20, updated Jul 17, 2024)
- Official repository to release the code and datasets in the paper, "Article Reranking by Memory-enhanced Key Sentence Matching for Detect… (☆19, updated Dec 15, 2021)
- [ICLR '25] Official Pytorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" (☆95, updated Nov 30, 2025)
- The official repo for "Where do Large Vision-Language Models Look at when Answering Questions?" (☆56, updated Jan 7, 2026)
- (☆64, updated Jan 23, 2026)
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs (☆163, updated Nov 6, 2024)
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model (☆47, updated Nov 10, 2024)
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" (☆108, updated Dec 4, 2024)
- [CVPR25 Highlight] A ChatGPT-Prompted Visual hallucination Evaluation Dataset, featuring over 100,000 data samples and four advanced eval… (☆31, updated Apr 16, 2025)
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation (☆133, updated Sep 11, 2025)
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding (☆378, updated Oct 7, 2024)
- [ICLR 2024 Spotlight] "Negative Label Guided OOD Detection with Pretrained Vision-Language Models" (☆30, updated Oct 23, 2024)
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… (☆85, updated Jun 20, 2025)
- [ICLR 2026] An official implementation of "SIM-CoT: Supervised Implicit Chain-of-Thought" (☆165, updated Feb 4, 2026)
- Code for Paper (ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models) (☆199, updated Dec 16, 2023)
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models (☆77, updated Jul 13, 2024)
- Face Recognition on NVIDIA TX2 (☆10, updated Sep 5, 2018)
- MiniMax-Provider-Verifier offers a rigorous, vendor-agnostic way to verify whether third-party deployments of the Minimax M2 model are co… (☆23, updated Jan 15, 2026)
- [CVPR 2025] Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Att… (☆63, updated Oct 9, 2025)
- Code for paper: Nullu: Mitigating Object Hallucinations in Large Vision-Language Models via HalluSpace Projection (☆50, updated Mar 13, 2025)
- XL-VLMs: General Repository for eXplainable Large Vision Language Models (☆46, updated Sep 8, 2025)
- 🔥Awesome Multimodal Large Language Models Paper List (☆154, updated Mar 12, 2025)
- Vision Transformer (ViT) models, with their attention mechanisms, revolutionized computer vision. By merging Class Activation Map (CAM) a… (☆13, updated Aug 14, 2023)
- Image reconstruction from human brain activity by VAE and adversarial learning (☆12, updated May 21, 2022)