CSfufu / Revisual-R1
🚀 ReVisual-R1 is a 7B open-source multimodal language model trained with a three-stage curriculum (cold-start pre-training, multimodal reinforcement learning, and text-only reinforcement learning) to reach state-of-the-art performance on visual and textual reasoning while keeping its outputs faithful, concise, and self-reflective.
☆141 · Updated 2 weeks ago
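The staged setup above can be pictured as a short ordered list of training phases, each resuming from the previous checkpoint. The sketch below is only an illustration of that idea under assumed names; the `Stage` dataclass, `train_stage` stub, and dataset identifiers are hypothetical and are not the repository's actual API.

```python
# Minimal sketch of a three-stage curriculum (cold-start SFT -> multimodal RL -> text-only RL).
# All names and dataset identifiers here are illustrative assumptions, not ReVisual-R1's real code.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str      # human-readable stage label
    method: str    # "sft" for the cold start, "rl" for reinforcement learning
    modality: str  # "text" or "multimodal"
    data: str      # placeholder dataset identifier

CURRICULUM = [
    Stage("cold-start", method="sft", modality="text",       data="cold_start_corpus"),
    Stage("stage-2",    method="rl",  modality="multimodal", data="vision_reasoning_rl"),
    Stage("stage-3",    method="rl",  modality="text",       data="text_reasoning_rl"),
]

def train_stage(stage: Stage, checkpoint: str) -> str:
    # Stub: a real implementation would launch SFT or RL training here and
    # return the path of the checkpoint produced by this stage.
    print(f"[{stage.name}] {stage.method} on {stage.modality} data ({stage.data}), init from {checkpoint}")
    return f"{checkpoint}+{stage.name}"

if __name__ == "__main__":
    ckpt = "base-7b"
    for stage in CURRICULUM:
        ckpt = train_stage(stage, ckpt)  # each stage starts from the previous stage's checkpoint
```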
Alternatives and similar repositories for Revisual-R1
Users interested in Revisual-R1 are comparing it to the libraries listed below.
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆104 · Updated last month
- OpenThinkIMG is an end-to-end open-source framework that empowers LVLMs to think with images. ☆231 · Updated 3 weeks ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆131 · Updated 2 months ago
- ☆112 · Updated this week
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆119 · Updated 3 weeks ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆186 · Updated 3 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆91 · Updated last month
- ☆172 · Updated this week
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models. ☆130 · Updated 2 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆160 · Updated 3 months ago
- ☆84 · Updated 2 weeks ago
- ✨✨R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆150 · Updated last month
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆222 · Updated last month
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆202 · Updated 2 months ago
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models ☆138 · Updated 3 weeks ago
- ZeroGUI: Automating Online GUI Learning at Zero Human Cost ☆63 · Updated this week
- [ICLR 2025 Oral] ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding ☆84 · Updated 2 months ago
- The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆162 · Updated 3 months ago
- [ACL 2025] A Generalizable and Purely Unsupervised Self-Training Framework ☆63 · Updated 3 weeks ago
- The official repo of One RL to See Them All: Visual Triple Unified Reinforcement Learning ☆275 · Updated 3 weeks ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆123 · Updated 2 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆219 · Updated last month
- Official implementation of GUI-R1: A Generalist R1-Style Vision-Language Action Model For GUI Agents ☆125 · Updated last month
- Scaling Computer-Use Grounding via UI Decomposition and Synthesis ☆79 · Updated last week
- [ACL 2025] Code and data for OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis ☆141 · Updated this week
- A Self-Training Framework for Vision-Language Reasoning ☆80 · Updated 5 months ago
- ☆303 · Updated 2 weeks ago
- ☆242 · Updated last month
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆117 · Updated 7 months ago
- General Reasoner: Advancing LLM Reasoning Across All Domains ☆142 · Updated 2 weeks ago