zjunlp / Deco
[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
☆51 · Updated 4 months ago
Alternatives and similar repositories for Deco:
Users interested in Deco are comparing it to the repositories listed below
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆71 · Updated 10 months ago
- [ACL 2024] Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective ☆45 · Updated 5 months ago
- ☆71 · Updated 3 months ago
- The official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual Debias Decoding strat… ☆77 · Updated last month
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆42 · Updated 2 weeks ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆71 · Updated 5 months ago
- Official implementation of the paper "Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal …" ☆46 · Updated last month
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆65 · Updated 10 months ago
- [CVPR 2024] HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data ☆45 · Updated 8 months ago
- [EMNLP 2023] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆82 · Updated last year
- The codebase for our EMNLP 2024 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆76 · Updated 2 months ago
- ☆69 · Updated 10 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆29 · Updated 6 months ago
- ☆41 · Updated 3 months ago
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆51 · Updated 2 months ago
- [NeurIPS 2024] Official PyTorch implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆57 · Updated 6 months ago
- Official implementation of MIA-DPO ☆54 · Updated 2 months ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆86 · Updated last year
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆28 · Updated 8 months ago
- ☆34 · Updated 9 months ago
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆41 · Updated 5 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆69 · Updated 5 months ago
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆48 · Updated 4 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆112 · Updated 5 months ago
- ☆25 · Updated 11 months ago
- ✨✨ The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio ☆44 · Updated 5 months ago
- Preference Learning for LLaVA ☆42 · Updated 5 months ago
- [ICML 2024] Repo for the paper "Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models" ☆20 · Updated 3 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆82 · Updated 11 months ago
- HallE-Control: Controlling Object Hallucination in LMMs ☆30 · Updated last year