[ICLR2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models
☆95 · Sep 14, 2024 · Updated last year
Alternatives and similar repositories for MMIU
Users interested in MMIU are comparing it to the repositories listed below.
- Reinforcing Long-Term Performance in Recommender Systems with User-Oriented Exploration Policy (SIGIR 2024) ☆14 · Oct 6, 2024 · Updated last year
- [NIPS 2023] Implementation of Foundation Model is Efficient Multimodal Multitask Model Selector ☆37 · Mar 7, 2024 · Updated 2 years ago
- Controllable Multi-Objective Re-ranking with Policy Hypernetworks (KDD 2023) ☆38 · Oct 6, 2024 · Updated last year
- We introduce a new approach, Token Reduction using CLIP Metric (TRIM), aimed at improving the efficiency of MLLMs without sacrificing their… ☆21 · Jan 11, 2026 · Updated 2 months ago
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI ☆117 · Jul 18, 2024 · Updated last year
- Official repository of the MMDU dataset ☆104 · Sep 29, 2024 · Updated last year
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆62 · Nov 7, 2024 · Updated last year
- [NeurIPS 2025 Spotlight] FUDOKI: Discrete Flow-based Unified Understanding and Generation via Kinetic-Optimal Velocities ☆72 · Dec 21, 2025 · Updated 2 months ago
- The implementation of UniSAR ☆33 · Nov 26, 2024 · Updated last year
- Adapting LLaMA Decoder to Vision Transformer ☆30 · May 20, 2024 · Updated last year
- Code for the paper "Metacognitive Retrieval-Augmented Large Language Models" ☆34 · Mar 1, 2024 · Updated 2 years ago
- Official InfiniBench: A Benchmark for Large Multi-Modal Models in Long-Form Movies and TV Shows ☆19 · Nov 4, 2025 · Updated 4 months ago
- [ACL 2024] ChartAssistant is a chart-based vision-language model for universal chart comprehension and reasoning. ☆132 · Sep 7, 2024 · Updated last year
- [NeurIPS 2024] Dense Connector for MLLMs ☆182 · Oct 14, 2024 · Updated last year
- Source code of "Improving Equivariant Graph Neural Networks on Large Geometric Graphs via Virtual Nodes Learning" ☆30 · Jun 25, 2025 · Updated 8 months ago
- Official implementation of Next Block Prediction: Video Generation via Semi-Autoregressive Modeling ☆42 · Feb 12, 2025 · Updated last year
- Code for the CVPR 2024 oral paper "Neural Lineage" ☆17 · Jun 18, 2024 · Updated last year
- [CVPR 2024] The official implementation of the paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding" ☆53 · Jun 16, 2025 · Updated 9 months ago
- Code for the paper "Executing Arithmetic: Fine-Tuning Large Language Models as Turing Machines" ☆11 · Oct 11, 2024 · Updated last year
- 😎 Awesome papers on token redundancy reduction ☆11 · Mar 12, 2025 · Updated last year
- [MM 2024, oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆62 · Jul 26, 2024 · Updated last year
- The codebase for our EMNLP 2024 paper "Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆86 · Jan 27, 2025 · Updated last year
- ☆18 · Jul 10, 2024 · Updated last year
- Official code of "Virgo: A Preliminary Exploration on Reproducing o1-like MLLM" ☆109 · May 27, 2025 · Updated 9 months ago
- X-Reasoner: Towards Generalizable Reasoning Across Modalities and Domains ☆50 · Feb 4, 2026 · Updated last month
- [TACL] Do Vision and Language Models Share Concepts? A Vector Space Alignment Study ☆16 · Nov 22, 2024 · Updated last year
- [CVPR 2024] DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model ☆19 · Apr 16, 2024 · Updated last year
- BESA is a differentiable weight-pruning technique for large language models. ☆17 · Mar 4, 2024 · Updated 2 years ago
- Official implementation of MDK12-Bench: A Multi-Discipline Benchmark for Evaluating Reasoning in Multimodal Large Language Models ☆12 · Nov 1, 2025 · Updated 4 months ago
- [ICML 2025] Code and data for the paper "Towards World Simulator: Crafting Physical Commonsense-Based Benchmark for Video Generation" ☆155 · Oct 25, 2024 · Updated last year
- Matryoshka Multimodal Models ☆122 · Jan 22, 2025 · Updated last year
- [ICDE 2024] Code of "Adapting Large Language Models by Integrating Collaborative Semantics for Recommendation" ☆66 · Sep 9, 2024 · Updated last year
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆63 · Jul 16, 2024 · Updated last year
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆77 · Jul 13, 2024 · Updated last year
- A curated list of awesome exploration-policy papers ☆13 · Jan 3, 2026 · Updated 2 months ago
- ✨✨ The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio ☆52 · Jul 11, 2025 · Updated 8 months ago
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆70 · Sep 20, 2025 · Updated 6 months ago
- 🚀 [NeurIPS 2024] Make Vision Matter in Visual-Question-Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark (NeurIPS'2… ☆89 · Jun 24, 2025 · Updated 8 months ago
- Official implementation of the CVPR 2024 paper "Retrieval-Augmented Open-Vocabulary Object Detection" ☆44 · Sep 12, 2024 · Updated last year