starreeze / efuf
The official repository for the EMNLP 2024 (main) paper "EFUF: Efficient Fine-grained Unlearning Framework for Mitigating Hallucinations in Multimodal Large Language Models"
☆20 · Updated 10 months ago
Alternatives and similar repositories for efuf
Users interested in efuf are comparing it to the repositories listed below.
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆57 · Updated last year
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆133 · Updated 5 months ago
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆107 · Updated last year
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆103 · Updated last year
- [ICML 2025] Official implementation of paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in… ☆171 · Updated 4 months ago
- [NeurIPS 2025] More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models ☆75 · Updated 8 months ago
- 😎 A curated list of awesome LMM hallucination papers, methods & resources. ☆150 · Updated last year
- This repo contains the code for the paper "Understanding and Mitigating Hallucinations in Large Vision-Language Models via Modular Attrib… ☆33 · Updated 6 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models. ☆85 · Updated last year
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigation ☆97 · Updated 2 years ago
- ☆56 · Updated last year
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆162 · Updated last year
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆100 · Updated 2 years ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆52 · Updated last year
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆105 · Updated 5 months ago
- This repository will continuously update the latest papers, technical reports, and benchmarks on multimodal reasoning! ☆54 · Updated 10 months ago
- [ACL 2025 Findings] Official PyTorch implementation of "Don't Miss the Forest for the Trees: Attentional Vision Calibration for Large Vis… ☆24 · Updated last year
- HallE-Control: Controlling Object Hallucination in LMMs ☆31 · Updated last year
- ☆88 · Updated last year
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆155 · Updated last year
- [NAACL 2025 Main] Official Implementation of MLLMU-Bench ☆48 · Updated 11 months ago
- The reinforcement learning code for the SPA-VL dataset ☆44 · Updated last year
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆153 · Updated 2 years ago
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆245 · Updated 5 months ago
- ECSO (Make MLLM safe with neither training nor any external models!) (https://arxiv.org/abs/2403.09572) ☆36 · Updated last year
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆71 · Updated 10 months ago
- [ACL 2024] Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models. Detect and mitigate object hallucinatio… ☆25 · Updated last year
- [ICCV 2025] The official code of the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration R… ☆111 · Updated 7 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge. ☆89 · Updated 11 months ago
- Code for ICLR 2025 Paper: Visual Description Grounding Reduces Hallucinations and Boosts Reasoning in LVLMs ☆22 · Updated 9 months ago