Hongcheng-Gao / HAVEN
Code and data for paper "Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation".
☆22 · Updated last week
Alternatives and similar repositories for HAVEN
Users interested in HAVEN are comparing it to the repositories listed below.
- Official Repository: A Comprehensive Benchmark for Logical Reasoning in MLLMs ☆43 · Updated 4 months ago
- ☆22 · Updated 5 months ago
- [NeurIPS 2025] NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆95 · Updated last month
- ☆98 · Updated 9 months ago
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ☆41 · Updated 6 months ago
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆85 · Updated 2 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆65 · Updated 4 months ago
- Official implementation of MIA-DPO ☆66 · Updated 9 months ago
- Official repository of the video reasoning benchmark MMR-V. Can Your MLLMs "Think with Video"? ☆36 · Updated 4 months ago
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆41 · Updated 6 months ago
- Mitigating Shortcuts in Visual Reasoning with Reinforcement Learning ☆40 · Updated 4 months ago
- [NeurIPS 2025] Unsupervised Post-Training for Multi-Modal LLM Reasoning via GRPO ☆60 · Updated this week
- (ICLR 2025 Spotlight) Official code repository for Interleaved Scene Graph ☆29 · Updated 2 months ago
- 🚀 Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models ☆33 · Updated 3 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆80 · Updated last week
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆68 · Updated 3 months ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆31 · Updated last month
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆135 · Updated last week
- [NeurIPS 2024] Official PyTorch implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆57 · Updated last year
- Fast-Slow Thinking for Large Vision-Language Model Reasoning ☆19 · Updated 6 months ago
- ✨✨ The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio ☆50 · Updated 3 months ago
- Repo for the paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆48 · Updated 2 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆88 · Updated last year
- [NeurIPS 2025] Official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆162 · Updated 4 months ago
- [NeurIPS 2025] MINT-CoT: Enabling Interleaved Visual Tokens in Mathematical Chain-of-Thought Reasoning ☆86 · Updated last month
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆47 · Updated 11 months ago
- Official repository of "ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing" ☆57 · Updated 4 months ago
- [NeurIPS 2025 Spotlight] Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆69 · Updated last month
- [NeurIPS 2025] Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models ☆47 · Updated last month
- Official PyTorch implementation of RACRO (https://www.arxiv.org/abs/2506.04559) ☆19 · Updated 4 months ago