Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video]
⭐820 · Dec 14, 2025 · Updated 2 months ago
Alternatives and similar repositories for Video-R1
Users that are interested in Video-R1 are comparing it to the libraries listed below
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] · ⭐381 · Feb 23, 2025 · Updated last year
- [NeurIPS 2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning · ⭐259 · Oct 18, 2025 · Updated 4 months ago
- A fork to add multimodal model training to open-r1 · ⭐1,484 · Feb 8, 2025 · Updated last year
- R1-like Video-LLM for Temporal Grounding · ⭐133 · Jun 20, 2025 · Updated 8 months ago
- Official repository of 'Visual-RFT: Visual Reinforcement Fine-Tuning' & 'Visual-ARFT: Visual Agentic Reinforcement Fine-Tuning' · ⭐2,320 · Oct 29, 2025 · Updated 4 months ago
- Solve Visual Understanding with Reinforced VLMs · ⭐5,845 · Oct 21, 2025 · Updated 4 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency · ⭐60 · Jun 6, 2025 · Updated 8 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning · ⭐114 · Dec 24, 2025 · Updated 2 months ago
- ⭐98 · Jun 23, 2025 · Updated 8 months ago
- Witness the aha moment of VLM with less than $3 · ⭐4,032 · May 19, 2025 · Updated 9 months ago
- Explore the Multimodal "Aha Moment" on 2B Model · ⭐623 · Mar 18, 2025 · Updated 11 months ago
- [ICLR 2026] This is the first paper to explore how to effectively use R1-like RL for MLLMs and introduce Vision-R1, a reasoning MLLM that… · ⭐767 · Jan 26, 2026 · Updated last month
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning · ⭐769 · Sep 7, 2025 · Updated 5 months ago
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks · ⭐840 · May 14, 2025 · Updated 9 months ago
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL · ⭐4,621 · Feb 10, 2026 · Updated 2 weeks ago
- This repository provides a valuable reference for researchers in the field of multimodality; please start your exploratory travel in RL-bas… · ⭐1,351 · Dec 7, 2025 · Updated 2 months ago
- ⭐4,566 · Sep 14, 2025 · Updated 5 months ago
- R1-onevision, a visual language model capable of deep CoT reasoning · ⭐575 · Apr 13, 2025 · Updated 10 months ago
- Official repo and evaluation implementation of VSI-Bench · ⭐673 · Aug 5, 2025 · Updated 6 months ago
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey · ⭐956 · Nov 14, 2025 · Updated 3 months ago
- Project Page for "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement" · ⭐603 · Jan 17, 2026 · Updated last month
- Official implementation of UnifiedReward & [NeurIPS 2025] UnifiedReward-Think & UnifiedReward-Flex · ⭐706 · Feb 10, 2026 · Updated 2 weeks ago
- VideoMind: A Chain-of-LoRA Agent for Temporal-Grounded Video Reasoning (ICLR 2026) · ⭐305 · Feb 8, 2026 · Updated 2 weeks ago
- [ICLR & NeurIPS 2025] Repository for the Show-o series, One Single Transformer to Unify Multimodal Understanding and Generation · ⭐1,875 · Jan 8, 2026 · Updated last month
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis · ⭐731 · Dec 8, 2025 · Updated 2 months ago
- Frontier Multimodal Foundation Models for Image and Video Understanding · ⭐1,105 · Aug 14, 2025 · Updated 6 months ago
- 🔥🔥🔥 [IEEE TCSVT] Latest Papers, Codes and Datasets on Vid-LLMs · ⭐3,087 · Dec 20, 2025 · Updated 2 months ago
- ⭐1,129 · Nov 20, 2025 · Updated 3 months ago
- One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks · ⭐3,707 · Updated this week
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks · ⭐3,845 · Updated this week
- Long-RL: Scaling RL to Long Sequences (NeurIPS 2025) · ⭐693 · Sep 24, 2025 · Updated 5 months ago
- [ICLR 2026] VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling · ⭐510 · Nov 18, 2025 · Updated 3 months ago
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities · ⭐1,163 · Jul 15, 2025 · Updated 7 months ago
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward · ⭐92 · Aug 8, 2025 · Updated 6 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… · ⭐945 · Aug 5, 2025 · Updated 6 months ago
- Qwen3-VL is the multimodal large language model series developed by the Qwen team, Alibaba Cloud · ⭐18,386 · Jan 30, 2026 · Updated last month
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… · ⭐1,548 · Jun 14, 2025 · Updated 8 months ago
- [CVPR 2025] The First Investigation of CoT Reasoning (RL, TTS, Reflection) in Image Generation · ⭐855 · May 23, 2025 · Updated 9 months ago
- MMaDA: Open-Sourced Multimodal Large Diffusion Language Models (dLLMs with block diffusion, mixed-CoT, unified RL) · ⭐1,578 · Feb 14, 2026 · Updated 2 weeks ago