ZrrSkywalker / MathVerse
[ECCV 2024] Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?
☆170 · Updated 5 months ago
Alternatives and similar repositories for MathVerse
Users interested in MathVerse are comparing it to the repositories listed below.
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ☆152 · Updated 10 months ago
- (AAAI 2024) BLIVA: A Simple Multimodal LLM for Better Handling of Text-rich Visual Questions ☆258 · Updated last year
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-language Models ☆98 · Updated last year
- An open-source implementation for training LLaVA-NeXT. ☆423 · Updated last year
- Official Repository of ChartX & ChartVLM: A Versatile Benchmark and Foundation Model for Complicated Chart Reasoning ☆242 · Updated last year
- ✨✨R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆265 · Updated 5 months ago
- 🚀 [NeurIPS 2024] Make Vision Matter in Visual-Question-Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark ☆88 · Updated 4 months ago
- (ECCV 2024) Empowering Multimodal Large Language Model as a Powerful Data Generator ☆114 · Updated 7 months ago
- [NeurIPS 2025] Efficient Reasoning Vision Language Models ☆407 · Updated last month
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM", IJCV 2025 ☆270 · Updated 5 months ago
- Official repository of "Beyond Fixed: Training-Free Variable-Length Denoising for Diffusion Large Language Models" ☆140 · Updated last month
- [ICLR 2025] Vision-Centric Evaluation for Retrieval-Augmented Multimodal Models ☆60 · Updated 9 months ago
- R1-like Computer-use Agent ☆86 · Updated 7 months ago
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models ☆180 · Updated 11 months ago
- Recipes to train the self-rewarding reasoning LLMs. ☆226 · Updated 7 months ago
- Official code of paper "Beyond 'Aha!': Toward Systematic Meta-Abilities Alignment in Large Reasoning Models" ☆81 · Updated 4 months ago
- ✨✨Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy ☆301 · Updated 5 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 · Updated 4 months ago
- ☆398 · Updated 10 months ago
- ☆95 · Updated 9 months ago
- A curated collection of resources, tools, and frameworks for developing GUI Agents. ☆160 · Updated last week
- ☆352 · Updated last year
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆115 · Updated 11 months ago
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆327 · Updated 3 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" [NeurIPS 2025] ☆157 · Updated 4 months ago
- [COLM 2025] Official implementation of the Law of Vision Representation in MLLMs ☆168 · Updated 2 weeks ago
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆174 · Updated 7 months ago
- Tree Search for LLM Agent Reinforcement Learning ☆229 · Updated 3 weeks ago
- [NeurIPS 2024] Matryoshka Query Transformer for Large Vision-Language Models ☆118 · Updated last year
- [ICML 2025 Oral] An official implementation of VideoRoPE & VideoRoPE++ ☆200 · Updated 2 months ago