weijiawu / Awesome-RL-for-Multimodal-Foundation-Models
📖 This is a repository for organizing papers, code, and other resources related to Visual Reinforcement Learning.
☆406Updated this week
Alternatives and similar repositories for Awesome-RL-for-Multimodal-Foundation-Models
Users interested in Awesome-RL-for-Multimodal-Foundation-Models are comparing it to the repositories listed below.
- A paper list for spatial reasoning☆631Updated 2 weeks ago
- Cambrian-S: Towards Spatial Supersensing in Video☆482Updated last month
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, …☆203Updated 8 months ago
- Official repo and evaluation implementation of VSI-Bench☆668Updated 5 months ago
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025)☆238Updated 6 months ago
- [NeurIPS 2025] Official Repo of Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration☆113Updated 2 months ago
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation☆417Updated 9 months ago
- [NeurIPS 2025] Pixel-Level Reasoning Model trained with RL☆269Updated 2 months ago
- Visual Planning: Let's Think Only with Images☆294Updated 8 months ago
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models).☆121Updated last year
- Official repository of "GoT: Unleashing Reasoning Capability of Multimodal Large Language Model for Visual Generation and Editing"☆305Updated 4 months ago
- [NeurIPS 2025]⭐️ Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning.☆267Updated 4 months ago
- Official implementation of Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence☆426Updated 2 weeks ago
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning☆103Updated 6 months ago
- Zebra-CoT dataset: https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT☆117Updated this week
- Holistic Evaluation of Multimodal LLMs on Spatial Intelligence☆77Updated 2 weeks ago
- Official Code for "Mini-o3: Scaling Up Reasoning Patterns and Interaction Turns for Visual Search"☆395Updated last week
- 📖 This is a repository for organizing papers, code, and other resources related to unified multimodal models.☆348Updated 3 weeks ago
- Official release of "Spatial-SSRL: Enhancing Spatial Understanding via Self-Supervised Reinforcement Learning"☆108Updated last month
- [CVPR 2025] EgoLife: Towards Egocentric Life Assistant☆382Updated 10 months ago
- A list of works on video generation towards world model☆334Updated this week
- This repository collects papers on VLLM applications. We will update new papers irregularly.☆201Updated last month
- Vision Manus: Your versatile Visual AI assistant☆317Updated last week
- PyTorch implementation of NEPA☆303Updated last week
- This is a collection of recent papers on reasoning in video generation models.☆95Updated 3 weeks ago
- ACTIVE-O3: Empowering Multimodal Large Language Models with Active Perception via GRPO☆77Updated 2 months ago
- Long-RL: Scaling RL to Long Sequences (NeurIPS 2025)☆690Updated 4 months ago
- Code for MetaMorph: Multimodal Understanding and Generation via Instruction Tuning☆232Updated last week