phellonchen / Awesome-MLLM-Reasoning
Latest Advances on Reasoning of Multimodal Large Language Models (Multimodal R1 / Visual R1)
☆ 35 · Updated 9 months ago
Alternatives and similar repositories for Awesome-MLLM-Reasoning
Users interested in Awesome-MLLM-Reasoning are comparing it to the repositories listed below
- Paper collections of multi-modal LLM for Math/STEM/Code. ☆ 131 · Updated last month
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources ☆ 211 · Updated 3 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆ 71 · Updated 9 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆ 88 · Updated 11 months ago
- ACL'2025: SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs, and preprint: SoftCoT++: Test-Time Scaling with Soft Chain-of… ☆ 73 · Updated 7 months ago
- This repository will continuously update the latest papers, technical reports, benchmarks about multimodal reasoning! ☆ 53 · Updated 9 months ago
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models. ☆ 152 · Updated 2 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆ 109 · Updated 7 months ago
- ☆ 112 · Updated 3 months ago
- Research works from Tencent AI Lab regarding self-evolving agents ☆ 74 · Updated 4 months ago
- ☆ 87 · Updated last year
- ☆ 176 · Updated 3 weeks ago
- The Next Step Forward in Multimodal LLM Alignment ☆ 193 · Updated 8 months ago
- This project aims to collect and collate various datasets for multimodal large model training, including but not limited to pre-training… ☆ 65 · Updated 7 months ago
- (ICLR'25) A Comprehensive Framework for Developing and Evaluating Multimodal Role-Playing Agents ☆ 89 · Updated 11 months ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆ 146 · Updated 8 months ago
- Extrapolating RLVR to General Domains without Verifiers ☆ 186 · Updated 4 months ago
- This repository contains the code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and the LLaMA-3.2-vi… ☆ 118 · Updated 6 months ago
- Official repo for "PAPO: Perception-Aware Policy Optimization for Multimodal Reasoning" ☆ 109 · Updated 3 weeks ago
- MMSearch-R1 is an end-to-end RL framework that enables LMMs to perform on-demand, multi-turn search with real-world multimodal search too… ☆ 372 · Updated 4 months ago
- [NeurIPS 2025] NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆ 102 · Updated 3 months ago
- A RLHF Infrastructure for Vision-Language Models ☆ 191 · Updated last year
- This is for ACL 2025 Findings Paper: From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities ☆ 82 · Updated this week
- ☆ 129 · Updated last month
- Pre-trained, Scalable, High-performance Reward Models via Policy Discriminative Learning. ☆ 163 · Updated 3 months ago
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models (NeurIPS 2025) ☆ 169 · Updated last month
- ☆ 152 · Updated 7 months ago
- ☆ 296 · Updated 5 months ago
- Official Repository of "AtomThink: Multimodal Slow Thinking with Atomic Step Reasoning" ☆ 57 · Updated last month
- A Survey on Benchmarks of Multimodal Large Language Models ☆ 145 · Updated 6 months ago