OpenGVLab / MMIU
[ICLR2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models
☆88 · Updated last year
Alternatives and similar repositories for MMIU
Users interested in MMIU are comparing it to the repositories listed below.
- Official implementation of MIA-DPO ☆67 · Updated 9 months ago
- ☆76 · Updated 4 months ago
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆86 · Updated 3 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆70 · Updated last week
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆148 · Updated last year
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆83 · Updated 9 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆130 · Updated 5 months ago
- [NeurIPS'24] Official PyTorch implementation of "Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment" ☆57 · Updated last year
- ☆79 · Updated last year
- Preference Learning for LLaVA ☆54 · Updated last year
- ☆98 · Updated 10 months ago
- (ICLR 2025 Spotlight) Official code repository for Interleaved Scene Graph ☆31 · Updated 3 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆76 · Updated 4 months ago
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆48 · Updated last year
- [NeurIPS-24] Official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆66 · Updated last year
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ☆59 · Updated last year
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆80 · Updated 2 weeks ago
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆166 · Updated last month
- [COLM'25] Official implementation of the Law of Vision Representation in MLLMs ☆168 · Updated last month
- Fast-Slow Thinking for Large Vision-Language Model Reasoning ☆20 · Updated 6 months ago
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆155 · Updated last year
- Doodling our way to AGI ✏️ 🖼️ 🧠 ☆112 · Updated 5 months ago
- [NeurIPS 2025] NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆95 · Updated last month
- Official repository of Personalized Visual Instruct Tuning ☆32 · Updated 8 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" [NeurIPS25] ☆164 · Updated 5 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆38 · Updated 3 weeks ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆61 · Updated 8 months ago
- Official repository of "ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing" ☆57 · Updated 4 months ago
- A large-scale dataset for training and evaluating models' ability on dense-text image generation ☆84 · Updated last month
- Empowering Unified MLLM with Multi-granular Visual Generation ☆130 · Updated 9 months ago