turningpoint-ai / VisualThinker-R1-Zero
Explore the Multimodal “Aha Moment” on a 2B Model
☆592 · Updated 3 months ago
Alternatives and similar repositories for VisualThinker-R1-Zero
Users interested in VisualThinker-R1-Zero are comparing it to the libraries listed below:
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning (☆654, updated 3 weeks ago)
- This is the first paper to explore how to effectively use RL for MLLMs and introduce Vision-R1, a reasoning MLLM that leverages cold-start… (☆607, updated last week)
- A fork to add multimodal model training to open-r1 (☆1,306, updated 4 months ago)
- ☆504 · Updated this week
- R1-onevision, a visual language model capable of deep CoT reasoning (☆528, updated 2 months ago)
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] (☆569, updated 3 weeks ago)
- MM-Eureka V0, also called R1-Multimodal-Journey; the latest version is in MM-Eureka (☆307, updated last month)
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks (☆770, updated last month)
- This repository provides a valuable reference for researchers in the field of multimodality; please start your exploratory travel in RL-based… (☆908, updated this week)
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey (☆446, updated 5 months ago)
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey (☆655, updated last month)
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] (☆348, updated 3 months ago)
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness (☆378, updated last month)
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities (☆893, updated 2 months ago)
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning (☆331, updated 5 months ago)
- Collects every awesome work about R1! (☆386, updated last month)
- [ACL 2025 🔥] Rethinking Step-by-step Visual Reasoning in LLMs (☆302, updated last month)
- Long Context Transfer from Language to Vision (☆381, updated 3 months ago)
- ☆363 · Updated 4 months ago
- Awesome RL-based LLM Reasoning (☆520, updated last month)
- ✨✨ R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning (☆144, updated last month)
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… (☆300, updated 3 months ago)
- MMR1: Advancing the Frontiers of Multimodal Reasoning (☆159, updated 3 months ago)
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback (☆281, updated 9 months ago)
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text (☆365, updated last month)
- 📖 This is a repository for organizing papers, code, and other resources related to unified multimodal models (☆578, updated 2 weeks ago)
- Official implementation of UnifiedReward & UnifiedReward-Think (☆417, updated this week)
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models (☆438, updated 5 months ago)
- EVE Series: Encoder-Free Vision-Language Models from BAAI (☆330, updated 3 months ago)
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models (☆129, updated 2 months ago)