Visual-Agent / DeepEyes
☆504 · Updated this week
Alternatives and similar repositories for DeepEyes
Users interested in DeepEyes are comparing it to the libraries listed below.
- MM-Eureka V0, also called R1-Multimodal-Journey; the latest version is in MM-Eureka ☆307 · Updated this week
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning ☆665 · Updated 3 weeks ago
- This is the first paper to explore how to effectively use RL for MLLMs and introduce Vision-R1, a reasoning MLLM that leverages cold-sta… ☆613 · Updated last week
- R1-onevision, a visual language model capable of deep CoT reasoning. ☆528 · Updated 2 months ago
- Explore the Multimodal “Aha Moment” on 2B Model ☆594 · Updated 3 months ago
- ✨First Open-Source R1-like Video-LLM [2025/02/18] ☆348 · Updated 4 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆160 · Updated 3 months ago
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks. ☆776 · Updated last month
- This repository provides a valuable reference for researchers in the field of multimodality; please start your exploratory travel in RL-bas… ☆922 · Updated last week
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models. ☆130 · Updated 2 months ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆131 · Updated 2 months ago
- ✨✨R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆149 · Updated last month
- Resources and paper list for "Thinking with Images for LVLMs". This repository accompanies our survey on how LVLMs can leverage visual in… ☆358 · Updated this week
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆281 · Updated 9 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥the first paper to explore R1 for video] ☆577 · Updated 3 weeks ago
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆368 · Updated last month
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey ☆663 · Updated this week
- The Next Step Forward in Multimodal LLM Alignment ☆164 · Updated last month
- Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆229 · Updated last month
- Official code implementation of Perception R1: Pioneering Perception Policy with Reinforcement Learning ☆206 · Updated 2 weeks ago
- Reading notes about Multimodal Large Language Models, Large Language Models, and Diffusion Models ☆445 · Updated 2 weeks ago
- A fork to add multimodal model training to open-r1 ☆1,309 · Updated 4 months ago
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆224 · Updated last month
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆378 · Updated last month
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆331 · Updated 6 months ago
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆445 · Updated 5 months ago
- Official implementation of UnifiedReward & UnifiedReward-Think ☆429 · Updated last week
- GPG: A Simple and Strong Reinforcement Learning Baseline for Model Reasoning ☆142 · Updated last month
- Agent-R1: Training Powerful LLM Agents with End-to-End Reinforcement Learning ☆573 · Updated 3 weeks ago
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR25] ☆267 · Updated last week