aim-uofa / Active-o3
ACTIVE-O3: Empowering Multimodal Large Language Models with Active Perception via GRPO
☆75 · Updated 3 weeks ago
Alternatives and similar repositories for Active-o3
Users interested in Active-o3 are comparing it to the repositories listed below.
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT ☆104 · Updated last month
- [NeurIPS 2025] Official Repo of Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration ☆96 · Updated last week
- Visual Planning: Let's Think Only with Images ☆284 · Updated 6 months ago
- ☆41 · Updated 6 months ago
- ☆95 · Updated 5 months ago
- Cambrian-S: Towards Spatial Supersensing in Video ☆407 · Updated last month
- Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing ☆82 · Updated 4 months ago
- Official implementation of "Open-o3 Video: Grounded Video Reasoning with Explicit Spatio-Temporal Evidence" ☆120 · Updated last month
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆80 · Updated 4 months ago
- Thinking with Videos from Open-Source Priors. We reproduce chain-of-frames visual reasoning by fine-tuning open-source video models. Give… ☆186 · Updated 2 months ago
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆195 · Updated 7 months ago
- ☆54 · Updated 6 months ago
- Visual Spatial Tuning ☆152 · Updated last week
- [NeurIPS 2025] The official repository for our paper, "Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reason… ☆150 · Updated 2 months ago
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆205 · Updated 4 months ago
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆99 · Updated 5 months ago
- ☆64 · Updated 5 months ago
- ☆28 · Updated 3 weeks ago
- ☆108 · Updated 4 months ago
- Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models ☆163 · Updated 2 months ago
- Official Repo of From Masks to Worlds: A Hitchhiker’s Guide to World Models. ☆58 · Updated last month
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆71 · Updated last month
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆177 · Updated 2 weeks ago
- ☆63 · Updated last month
- [arXiv: 2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆94 · Updated 9 months ago
- Pixel-Level Reasoning Model trained with RL [NeurIPS 2025] ☆254 · Updated last month
- STI-Bench: Are MLLMs Ready for Precise Spatial-Temporal World Understanding? ☆33 · Updated 5 months ago
- 📖 A repository organizing papers, code, and other resources related to Visual Reinforcement Learning. ☆354 · Updated last week
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces ☆87 · Updated 6 months ago
- Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World" ☆116 · Updated 2 months ago