Osilly / Vision-R1
This is the first paper to explore how to effectively use RL for MLLMs; it introduces Vision-R1, a reasoning MLLM that leverages cold-start initialization and RL training to incentivize reasoning capability.
☆607 · Updated last week
Alternatives and similar repositories for Vision-R1
Users interested in Vision-R1 are comparing it to the repositories listed below.
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning ☆654 · Updated 3 weeks ago
- R1-Onevision, a visual language model capable of deep CoT reasoning. ☆528 · Updated 2 months ago
- This repository provides a valuable reference for researchers in the field of multimodality; please start your exploratory journey in RL-bas… ☆908 · Updated this week
- ☆504 · Updated this week
- Explore the Multimodal “Aha Moment” on a 2B Model ☆592 · Updated 3 months ago
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey ☆655 · Updated last month
- MM-Eureka V0, also called R1-Multimodal-Journey; the latest version is in MM-Eureka ☆307 · Updated last month
- A fork to add multimodal model training to open-r1 ☆1,306 · Updated 4 months ago
- Extends OpenRLHF to support LMM RL training for reproducing DeepSeek-R1 on multimodal tasks. ☆770 · Updated last month
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆569 · Updated 3 weeks ago
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] ☆348 · Updated 3 months ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆331 · Updated 5 months ago
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆446 · Updated 5 months ago
- Project page for "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement" ☆418 · Updated last week
- Reading notes about Multimodal Large Language Models, Large Language Models, and Diffusion Models ☆445 · Updated 2 weeks ago
- Resources and paper list for "Thinking with Images for LVLMs". This repository accompanies our survey on how LVLMs can leverage visual in… ☆291 · Updated this week
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… ☆438 · Updated 5 months ago
- Efficient Multimodal Large Language Models: A Survey ☆355 · Updated last month
- A paper list of some recent works on Token Compression for ViT and VLM ☆510 · Updated 2 weeks ago
- Official implementation of UnifiedReward & UnifiedReward-Think ☆417 · Updated this week
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆365 · Updated last month
- The Next Step Forward in Multimodal LLM Alignment ☆164 · Updated last month
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆281 · Updated 9 months ago
- ☆363 · Updated 4 months ago
- [CVPR'25 Highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆378 · Updated last month
- 📖 A curated list of resources dedicated to hallucination in multimodal large language models (MLLMs). ☆721 · Updated 2 months ago
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆172 · Updated 2 months ago
- An easy-to-use, scalable, and high-performance RLHF framework designed for multimodal models. ☆129 · Updated 2 months ago
- Official code implementation of Perception R1: Pioneering Perception Policy with Reinforcement Learning ☆206 · Updated 2 weeks ago
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025] ☆253 · Updated last week