FanqingM / MM-Eureka-V0
MM-Eureka-V0, also known as R1-Multimodal-Journey. The latest version is in MM-Eureka.
☆324 · Jun 21, 2025 · Updated 7 months ago
Alternatives and similar repositories for MM-Eureka-V0
Users interested in MM-Eureka-V0 are comparing it to the repositories listed below
- A fork to add multimodal model training to open-r1 (☆1,474 · Feb 8, 2025 · Updated last year)
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning (☆768 · Sep 7, 2025 · Updated 5 months ago)
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks. (☆840 · May 14, 2025 · Updated 9 months ago)
- Explore the Multimodal “Aha Moment” on 2B Model (☆623 · Mar 18, 2025 · Updated 10 months ago)
- Witness the aha moment of VLM with less than $3. (☆4,032 · May 19, 2025 · Updated 8 months ago)
- ✨First Open-Source R1-like Video-LLM [2025/02/18] (☆381 · Feb 23, 2025 · Updated 11 months ago)
- Solve Visual Understanding with Reinforced VLMs (☆5,841 · Oct 21, 2025 · Updated 3 months ago)
- R1-onevision, a visual language model capable of deep CoT reasoning. (☆575 · Apr 13, 2025 · Updated 10 months ago)
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL (☆4,599 · Updated this week)
- [CVPR2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models (☆233 · Nov 7, 2025 · Updated 3 months ago)
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources (☆215 · Sep 26, 2025 · Updated 4 months ago)
- Official repository of 'Visual-RFT: Visual Reinforcement Fine-Tuning' & 'Visual-ARFT: Visual Agentic Reinforcement Fine-Tuning' (☆2,319 · Oct 29, 2025 · Updated 3 months ago)
- This repository provides a valuable reference for researchers in the field of multimodality; please start your exploratory journey in RL-bas… (☆1,350 · Dec 7, 2025 · Updated 2 months ago)
- [ICLR2026] This is the first paper to explore how to effectively use R1-like RL for MLLMs and introduce Vision-R1, a reasoning MLLM that… (☆765 · Jan 26, 2026 · Updated 3 weeks ago)
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks (☆3,816 · Updated this week)
- R1-Vision: Let's first take a look at the image (☆48 · Feb 16, 2025 · Updated last year)
- [NeurIPS 2025] NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation (☆105 · Sep 18, 2025 · Updated 4 months ago)
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* (☆109 · May 27, 2025 · Updated 8 months ago)
- A Self-Training Framework for Vision-Language Reasoning (☆88 · Jan 23, 2025 · Updated last year)
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback (☆306 · Sep 11, 2024 · Updated last year)
- ☆111 · Jan 8, 2025 · Updated last year
- MM-PRM: Enhancing Multimodal Mathematical Reasoning with Scalable Step-Level Supervision (☆27 · May 26, 2025 · Updated 8 months ago)
- ☆4,562 · Sep 14, 2025 · Updated 5 months ago
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models. (☆156 · Jan 5, 2026 · Updated last month)
- Official Repo for Open-Reasoner-Zero (☆2,085 · Jun 2, 2025 · Updated 8 months ago)
- Official repo for "PAPO: Perception-Aware Policy Optimization for Multimodal Reasoning" (☆116 · Feb 4, 2026 · Updated last week)
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model (☆281 · Jun 25, 2024 · Updated last year)
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… (☆1,544 · Jun 14, 2025 · Updated 8 months ago)
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … (☆424 · Dec 22, 2024 · Updated last year)
- Resources and paper list for "Thinking with Images for LVLMs". This repository accompanies our survey on how LVLMs can leverage visual in… (☆1,329 · Feb 3, 2026 · Updated 2 weeks ago)
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. (☆1,986 · Nov 7, 2025 · Updated 3 months ago)
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities (☆1,156 · Jul 15, 2025 · Updated 7 months ago)
- 🔥Awesome Multimodal Large Language Models Paper List (☆154 · Mar 12, 2025 · Updated 11 months ago)
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥the first paper to explore R1 for video] (☆820 · Dec 14, 2025 · Updated 2 months ago)
- Paper collections of multi-modal LLM for Math/STEM/Code. (☆136 · Nov 17, 2025 · Updated 2 months ago)
- [ICCV 2025] LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning (☆2,125 · Dec 12, 2025 · Updated 2 months ago)
- A RLHF Infrastructure for Vision-Language Models (☆196 · Nov 15, 2024 · Updated last year)
- Simple RL training for reasoning (☆3,827 · Dec 23, 2025 · Updated last month)
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models (☆85 · Oct 26, 2025 · Updated 3 months ago)