Wild-Cooperation-Hub / Awesome-MLLM-Reasoning-Benchmarks
A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models.
☆71 · Mar 18, 2025 · Updated 10 months ago
Alternatives and similar repositories for Awesome-MLLM-Reasoning-Benchmarks
Users interested in Awesome-MLLM-Reasoning-Benchmarks are comparing it to the repositories listed below
- Collections of Papers and Projects for Multimodal Reasoning. ☆107 · Apr 25, 2025 · Updated 9 months ago
- This repository provides valuable references for researchers in the field of multimodality; please start your exploratory journey in RL-bas… ☆1,350 · Dec 7, 2025 · Updated 2 months ago
- 【ICME2025 Oral】 Official PyTorch code for "Fraesormer: Learning Adaptive Sparse Transformer for Efficient Food Recognition" ☆11 · Mar 21, 2025 · Updated 10 months ago
- Recent Advances on MLLM's Reasoning Ability ☆26 · Apr 11, 2025 · Updated 10 months ago
- 【ICME2025 Oral】 Official PyTorch code for "Learning Dual-Domain Multi-Scale Representations for Single Image Deraining" ☆16 · Mar 21, 2025 · Updated 10 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆62 · Nov 7, 2024 · Updated last year
- 🔥🔥 MLVU: Multi-task Long Video Understanding Benchmark ☆241 · Aug 21, 2025 · Updated 5 months ago
- On Path to Multimodal Generalist: General-Level and General-Bench ☆18 · Jul 11, 2025 · Updated 7 months ago
- [arXiv 2024] Official code for MMTrail: A Multimodal Trailer Video Dataset with Language and Music Descriptions ☆33 · Feb 6, 2025 · Updated last year
- R1-onevision, a visual language model capable of deep CoT reasoning. ☆575 · Apr 13, 2025 · Updated 10 months ago
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources ☆215 · Sep 26, 2025 · Updated 4 months ago
- ☆15 · May 23, 2022 · Updated 3 years ago
- 👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ☆74 · Jan 20, 2025 · Updated last year
- A collection of awesome "think with videos" papers. ☆89 · Dec 1, 2025 · Updated 2 months ago
- Latest Advances on (RL-based) Multimodal Reasoning and Generation in Multimodal Large Language Models ☆47 · Oct 30, 2025 · Updated 3 months ago
- ☆20 · Feb 5, 2026 · Updated last week
- Video Benchmark Suite: Rapid Evaluation of Video Foundation Models ☆15 · Jan 10, 2025 · Updated last year
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆93 · Dec 19, 2024 · Updated last year
- We introduce a new approach, Token Reduction using CLIP Metric (TRIM), aimed at improving the efficiency of MLLMs without sacrificing their… ☆20 · Jan 11, 2026 · Updated last month
- ☆21 · Jul 9, 2025 · Updated 7 months ago
- ☆17 · Feb 22, 2024 · Updated last year
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey ☆957 · Nov 14, 2025 · Updated 3 months ago
- Controllable image captioning model with unsupervised modes ☆21 · Apr 14, 2023 · Updated 2 years ago
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks. ☆840 · May 14, 2025 · Updated 9 months ago
- This repository is the official implementation of "Look-Back: Implicit Visual Re-focusing in MLLM Reasoning". ☆84 · Jul 10, 2025 · Updated 7 months ago
- GPT as a Monte Carlo Language Tree: A Probabilistic Perspective ☆45 · Jan 18, 2025 · Updated last year
- Website for MathVista ☆21 · Jun 9, 2025 · Updated 8 months ago
- ☆37 · Nov 26, 2025 · Updated 2 months ago
- Cluster Document for IIL@HIT ☆20 · Apr 5, 2023 · Updated 2 years ago
- Resources and paper list for "Thinking with Images for LVLMs". This repository accompanies our survey on how LVLMs can leverage visual in… ☆1,329 · Feb 3, 2026 · Updated 2 weeks ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆60 · Jun 6, 2025 · Updated 8 months ago
- A Survey on Benchmarks of Multimodal Large Language Models ☆148 · Jul 1, 2025 · Updated 7 months ago
- A fork to add multimodal model training to open-r1 ☆1,474 · Feb 8, 2025 · Updated last year
- R1-Vision: Let's first take a look at the image ☆48 · Feb 16, 2025 · Updated last year
- Rex-Thinker: Grounded Object Referring via Chain-of-Thought Reasoning ☆141 · Jun 30, 2025 · Updated 7 months ago
- Official code for the paper "Contrast and Classify: Training Robust VQA Models", published at ICCV 2021 ☆19 · Jul 27, 2021 · Updated 4 years ago
- Official implementation of "Open-o3 Video: Grounded Video Reasoning with Explicit Spatio-Temporal Evidence" ☆132 · Dec 18, 2025 · Updated last month
- Code for "CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning" ☆32 · Mar 26, 2025 · Updated 10 months ago
- Evaluating Knowledge Acquisition from Multi-Discipline Professional Videos ☆66 · Sep 5, 2025 · Updated 5 months ago