yuezih / less-is-more
Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024)
☆55 · Updated 9 months ago
Alternatives and similar repositories for less-is-more
Users interested in less-is-more are comparing it to the repositories listed below.
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆80 · Updated 9 months ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆148 · Updated last year
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆79 · Updated last year
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆91 · Updated last year
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆89 · Updated last year
- [ACM Multimedia 2025] The official repo for Debiasing Large Visual Language Models, including a post-hoc debiasing method and Visual… ☆81 · Updated 6 months ago
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆96 · Updated 8 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (CVPR 2024) ☆48 · Updated last year
- ✨✨ The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio ☆46 · Updated last month
- ☆79 · Updated last year
- HallE-Control: Controlling Object Hallucination in LMMs ☆31 · Updated last year
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆66 · Updated last month
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigation ☆96 · Updated last year
- [ICML 2025] Official implementation of the paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in… ☆145 · Updated last month
- [ICML 2024] Repo for the paper "Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models" ☆21 · Updated 7 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆71 · Updated last year
- [ICML 2024] Official implementation of "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆95 · Updated 8 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆132 · Updated 9 months ago
- MMICL: a state-of-the-art VLM with in-context learning ability, from PKU ☆49 · Updated last month
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆87 · Updated last year
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆70 · Updated last year
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆64 · Updated 6 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆65 · Updated 2 months ago
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆218 · Updated last year
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆85 · Updated last week
- ☆85 · Updated 7 months ago
- [ICLR 2025] Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality ☆37 · Updated last month
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆128 · Updated last year
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆36 · Updated 4 months ago
- 😎 A curated list of awesome LMM hallucination papers, methods & resources ☆149 · Updated last year