weijiawu / Awesome-Visual-Reinforcement-Learning
📖 This is a repository for organizing papers, code, and other resources related to Visual Reinforcement Learning.
☆375 · Updated this week
Alternatives and similar repositories for Awesome-Visual-Reinforcement-Learning
Users interested in Awesome-Visual-Reinforcement-Learning are comparing it to the repositories listed below.
- A paper list for spatial reasoning ☆588 · Updated 2 weeks ago
- Cambrian-S: Towards Spatial Supersensing in Video ☆468 · Updated 2 weeks ago
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆198 · Updated 8 months ago
- Official repo and evaluation implementation of VSI-Bench ☆658 · Updated 5 months ago
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆222 · Updated 5 months ago
- A list of works on video generation towards world model ☆313 · Updated this week
- Pixel-Level Reasoning Model trained with RL [NeurIPS 2025] ☆257 · Updated 2 months ago
- [NeurIPS 2025] Official repo of Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration ☆105 · Updated last month
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation ☆411 · Updated 8 months ago
- Official repository of "GoT: Unleashing Reasoning Capability of Multimodal Large Language Model for Visual Generation and Editing" ☆301 · Updated 3 months ago
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆102 · Updated 6 months ago
- ☆112 · Updated 5 months ago
- [NeurIPS 2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning ☆252 · Updated 2 months ago
- This is a collection of recent papers on reasoning in video generation models. ☆91 · Updated last week
- [NeurIPS 2025] ⭐️ Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning ☆252 · Updated 3 months ago
- Collections of Papers and Projects for Multimodal Reasoning ☆106 · Updated 8 months ago
- ☆116 · Updated 2 months ago
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT ☆112 · Updated 2 months ago
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models) ☆122 · Updated last year
- Official Code for "Mini-o3: Scaling Up Reasoning Patterns and Interaction Turns for Visual Search" ☆383 · Updated 3 months ago
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆303 · Updated last year
- Official implementation of Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence ☆419 · Updated last month
- Thinking with Videos from Open-Source Priors. We reproduce chain-of-frames visual reasoning by fine-tuning open-source video models. Give… ☆202 · Updated 2 months ago
- This repository collects papers on VLLM applications. We will update new papers irregularly. ☆195 · Updated 3 weeks ago
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆218 · Updated 3 weeks ago
- Visual Planning: Let's Think Only with Images ☆290 · Updated 7 months ago
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆192 · Updated 2 weeks ago
- 📖 This is a repository for organizing papers, codes, and other resources related to unified multimodal models. ☆339 · Updated 2 months ago
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning ☆134 · Updated 4 months ago
- Thinking in 360°: Humanoid Visual Search in the Wild ☆105 · Updated last month