flageval-baai / CMMU
[IJCAI 2024] CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and Reasoning
☆24 · Updated last year
Alternatives and similar repositories for CMMU
Users interested in CMMU are comparing it to the repositories listed below.
- The Hugging Face implementation of the Fine-grained Late-interaction Multi-modal Retriever. ☆97 · Updated 3 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆163 · Updated 6 months ago
- ☆74 · Updated last year
- [ACL'2024 Findings] GAOKAO-MM: A Chinese Human-Level Benchmark for Multimodal Models Evaluation ☆65 · Updated last year
- A Self-Training Framework for Vision-Language Reasoning ☆84 · Updated 7 months ago
- Official Repository of "AtomThink: Multimodal Slow Thinking with Atomic Step Reasoning" ☆55 · Updated last month
- ☆49 · Updated last year
- (ICLR'25) A Comprehensive Framework for Developing and Evaluating Multimodal Role-Playing Agents ☆84 · Updated 7 months ago
- MMSearch-R1 is an end-to-end RL framework that enables LMMs to perform on-demand, multi-turn search with real-world multimodal search too… ☆312 · Updated 3 weeks ago
- Scaling Preference Data Curation via Human-AI Synergy ☆107 · Updated 2 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆115 · Updated 9 months ago
- Official Repository of MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations ☆97 · Updated last year
- ☆179 · Updated 7 months ago
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆89 · Updated last year
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics,… ☆124 · Updated 3 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆107 · Updated 3 months ago
- The code and data of We-Math, accepted to the ACL 2025 main conference. ☆135 · Updated 3 weeks ago
- ☆90 · Updated last year
- ☆165 · Updated 4 months ago
- [NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities. ☆117 · Updated 4 months ago
- ☆33 · Updated 8 months ago
- ☆213 · Updated last year
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆293 · Updated last year
- [EMNLP'25] Code for the paper "MT-R1-Zero: Advancing LLM-based Machine Translation via R1-Zero-like Reinforcement Learning" ☆56 · Updated 5 months ago
- [ICCV 2025] GUIOdyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUIOdyssey consists of 8,834 e… ☆128 · Updated last month
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆125 · Updated 4 months ago
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning 175B LLMs with 18 GB GPU Memory ☆183 · Updated 2 months ago
- Paper collections of multi-modal LLM for Math/STEM/Code. ☆127 · Updated last month
- Official implementation of "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆91 · Updated 11 months ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆46 · Updated last year