[NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities.
☆131 · Updated May 16, 2025
Alternatives and similar repositories for MATH-V
Users interested in MATH-V are comparing it to the repositories listed below.
- ☆13 · Updated May 9, 2023
- [ECCV 2024] Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? — ☆177 · Updated Apr 28, 2025
- MathVista: data, code, and evaluation for Mathematical Reasoning in Visual Contexts — ☆355 · Updated Sep 29, 2025
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models — ☆92 · Updated Jun 28, 2024
- ☆23 · Updated Jul 5, 2024
- ☆14 · Updated Mar 11, 2024
- [MathCoder, MathCoder-VL] Family of LLMs/LMMs for mathematical reasoning — ☆336 · Updated Oct 18, 2025
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… — ☆62 · Updated Nov 7, 2024
- (ACL 2025) MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale — ☆49 · Updated Jun 4, 2025
- Official GitHub repo of G-LLaVA — ☆148 · Updated Feb 20, 2025
- Code for the paper "Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning" — ☆47 · Updated Feb 19, 2026
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models — ☆153 · Updated Dec 5, 2024
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" — ☆33 · Updated Oct 12, 2024
- Official repo for StableLLAVA — ☆95 · Updated Dec 22, 2023
- Improving word mover's distance by leveraging the self-attention matrix (published in EMNLP 2023 Findings) — ☆10 · Updated Mar 10, 2026
- MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering. A comprehensive evaluation of multimodal large model multilingua… — ☆64 · Updated May 15, 2025
- [ICLR 2025] Is Your Model Really A Good Math Reasoner? Evaluating Mathematical Reasoning with Checklist — ☆35 · Updated Oct 23, 2024
- ☆85 · Updated Jan 25, 2025
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models — ☆45 · Updated Jun 14, 2024
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency — ☆136 · Updated Aug 5, 2025
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs — ☆142 · Updated Apr 22, 2025
- Repository for Geometry Problem Solving Method Evaluation — ☆26 · Updated Oct 8, 2024
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback — ☆307 · Updated Sep 11, 2024
- A collection of papers on multi-modal LLMs for Math/STEM/Code — ☆137 · Updated Nov 17, 2025
- ☆47 · Updated Nov 8, 2024
- ☆12 · Updated Jul 4, 2024
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks — ☆3,920 · Updated this week
- Code repository for "Wings: Learning Multimodal LLMs without Text-only Forgetting" [NeurIPS 2024] — ☆27 · Updated Dec 28, 2024
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs — ☆55 · Updated Mar 9, 2025
- ☆27 · Updated Jul 6, 2024
- [EMNLP 22] UniGeo: Unifying Geometry Logical Reasoning via Reformulating Mathematical Expression — ☆33 · Updated Dec 7, 2022
- Holistic Coverage and Faithfulness Evaluation of Large Vision-Language Models (ACL Findings 2024) — ☆16 · Updated Apr 23, 2024
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge — ☆91 · Updated Feb 17, 2025
- SuperCLUE-Math6: exploring a new generation of Chinese-native multi-turn, multi-step mathematical reasoning datasets — ☆58 · Updated Feb 5, 2024
- ☆14 · Updated May 20, 2025
- (AAAI 2024) BLIVA: A Simple Multimodal LLM for Better Handling of Text-rich Visual Questions — ☆260 · Updated Apr 14, 2024
- Dataset introduced in PlotQA: Reasoning over Scientific Plots — ☆84 · Updated Jun 20, 2023
- A lightweight, flexible Video-MLLM developed by the Tencent QQ Multimedia Research Team — ☆74 · Updated Oct 14, 2024
- [NeurIPS 2024 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … — ☆436 · Updated Dec 22, 2024