Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video]
★837 · updated Dec 14, 2025
Alternatives and similar repositories for Video-R1
Users interested in Video-R1 are comparing it to the repositories listed below.
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] (★382, updated Feb 23, 2025)
- [NIPS2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning (★262, updated Oct 18, 2025)
- R1-like Video-LLM for Temporal Grounding (★135, updated Jun 20, 2025)
- A fork to add multimodal model training to open-r1 (★1,503, updated Feb 8, 2025)
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency (★62, updated Jun 6, 2025)
- Official repository of 'Visual-RFT: Visual Reinforcement Fine-Tuning' & 'Visual-ARFT: Visual Agentic Reinforcement Fine-Tuning' (★2,312, updated Oct 29, 2025)
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning (★115, updated Dec 24, 2025)
- Solve Visual Understanding with Reinforced VLMs (★5,865, updated Mar 12, 2026)
- Witness the aha moment of VLM with less than $3. (★4,035, updated May 19, 2025)
- (★99, updated Jun 23, 2025)
- Explore the Multimodal "Aha Moment" on a 2B Model (★624, updated Mar 18, 2025)
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning (★772, updated Sep 7, 2025)
- [ICLR2026] This is the first paper to explore how to effectively use R1-like RL for MLLMs and introduce Vision-R1, a reasoning MLLM that… (★801, updated Jan 26, 2026)
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL (★4,721, updated Mar 10, 2026)
- This repository provides valuable reference for researchers in the field of multimodality; please start your exploratory travel in RL-bas… (★1,372, updated Feb 26, 2026)
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks. (★843, updated May 14, 2025)
- (★4,591, updated Sep 14, 2025)
- R1-onevision, a visual language model capable of deep CoT reasoning. (★578, updated Apr 13, 2025)
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey (★960, updated Nov 14, 2025)
- Project page for "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement" (★611, updated Jan 17, 2026)
- Official repo and evaluation implementation of VSI-Bench (★682, updated Aug 5, 2025)
- 🧠 VideoMind: A Chain-of-LoRA Agent for Temporal-Grounded Video Reasoning (ICLR 2026) (★311, updated Feb 8, 2026)
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward (★93, updated Aug 8, 2025)
- 🔥🔥🔥 [IEEE TCSVT] Latest Papers, Codes and Datasets on Vid-LLMs. (★3,116, updated this week)
- [ICLR2026] VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling (★511, updated Nov 18, 2025)
- One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks (★3,888, updated Mar 11, 2026)
- Official implementation of UnifiedReward & [NeurIPS 2025] UnifiedReward-Think & UnifiedReward-Flex (★740, updated Mar 7, 2026)
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis (★732, updated Dec 8, 2025)
- Frontier Multimodal Foundation Models for Image and Video Understanding (★1,122, updated Aug 14, 2025)
- [ICLR & NeurIPS 2025] Repository for the Show-o series, One Single Transformer to Unify Multimodal Understanding and Generation. (★1,895, updated Jan 8, 2026)
- Qwen3-VL is the multimodal large language model series developed by the Qwen team, Alibaba Cloud. (★18,671, updated Jan 30, 2026)
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks (★3,920, updated this week)
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" (★31, updated Dec 23, 2024)
- (★1,155, updated Nov 20, 2025)
- Long-RL: Scaling RL to Long Sequences (NeurIPS 2025) (★700, updated Sep 24, 2025)
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities (★1,164, updated Jul 15, 2025)
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning (★107, updated Jul 9, 2025)
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… (★1,555, updated Jun 14, 2025)
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs (★1,284, updated Jan 23, 2025)