aim-uofa / Active-o3
ACTIVE-O3: Empowering Multimodal Large Language Models with Active Perception via GRPO
☆72 · Updated 3 months ago
Alternatives and similar repositories for Active-o3
Users interested in Active-o3 are comparing it to the repositories listed below.
- Visual Planning: Let's Think Only with Images ☆271 · Updated 4 months ago
- [NeurIPS 2025] Official Repo of Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration ☆81 · Updated 3 months ago
- ☆88 · Updated 2 months ago
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces ☆80 · Updated 3 months ago
- The official repository for our paper, "Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reasoning". ☆138 · Updated last week
- ☆41 · Updated 3 months ago
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT ☆77 · Updated last month
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆155 · Updated last month
- ☆50 · Updated 4 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆74 · Updated 2 months ago
- 📖 A repository for organizing papers, code, and other resources related to Visual Reinforcement Learning. ☆258 · Updated last week
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆79 · Updated 2 months ago
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆134 · Updated this week
- Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing ☆66 · Updated last month
- Official Implementation of LaViDa: A Large Diffusion Language Model for Multimodal Understanding ☆147 · Updated 2 months ago
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆189 · Updated 4 months ago
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning ☆117 · Updated last month
- ☆84 · Updated last month
- Official repository for "RLVR-World: Training World Models with Reinforcement Learning" (NeurIPS 2025), https://arxiv.org/abs/2505.13934 ☆88 · Updated this week
- Pixel-Level Reasoning Model trained with RL [NeurIPS 2025] ☆210 · Updated last week
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆48 · Updated last month
- Official implementation of Next Block Prediction: Video Generation via Semi-Autoregressive Modeling ☆39 · Updated 7 months ago
- [arXiv 2025] Can MLLMs Guide Me Home? A Benchmark Study on Fine-Grained Visual Reasoning from Transit Maps ☆66 · Updated this week
- Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World" ☆107 · Updated 2 months ago
- [arXiv: 2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆87 · Updated 6 months ago
- [EMNLP-2025 Oral] ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆53 · Updated 3 weeks ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆63 · Updated 2 months ago
- Implementation for "The Scalability of Simplicity: Empirical Analysis of Vision-Language Learning with a Single Transformer" ☆64 · Updated 3 weeks ago
- Official Code for "Mini-o3: Scaling Up Reasoning Patterns and Interaction Turns for Visual Search" ☆268 · Updated this week
- ☆30 · Updated 9 months ago