aim-uofa / Active-o3
ACTIVE-O3: Empowering Multimodal Large Language Models with Active Perception via GRPO
☆70 · Updated 3 months ago
Alternatives and similar repositories for Active-o3
Users interested in Active-o3 are comparing it to the libraries listed below.
- Official Repo of Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration · ☆77 · Updated 2 months ago
- ☆87 · Updated 2 months ago
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) · ☆145 · Updated 3 weeks ago
- ☆48 · Updated 3 months ago
- Visual Planning: Let's Think Only with Images · ☆269 · Updated 3 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? · ☆68 · Updated last month
- Pixel-Level Reasoning Model trained with RL · ☆197 · Updated 2 months ago
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning · ☆109 · Updated last week
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning · ☆76 · Updated last month
- ☆41 · Updated 2 months ago
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision · ☆96 · Updated 3 weeks ago
- [arXiv: 2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation · ☆83 · Updated 6 months ago
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT · ☆72 · Updated 3 weeks ago
- The official repository for the paper "Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reasoning" · ☆136 · Updated last month
- Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World" · ☆100 · Updated last month
- ☆30 · Updated 8 months ago
- Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing · ☆64 · Updated last month
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" · ☆60 · Updated last month
- Autoregressive Semantic Visual Reconstruction Helps VLMs Understand Better · ☆37 · Updated 2 months ago
- ☆53 · Updated last month
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment · ☆55 · Updated last month
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … · ☆187 · Updated 3 months ago
- Official implementation of Next Block Prediction: Video Generation via Semi-Autoregressive Modeling · ☆38 · Updated 6 months ago
- 📖 A repository for organizing papers, code, and other resources related to Visual Reinforcement Learning · ☆213 · Updated this week
- Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models · ☆147 · Updated 3 months ago
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning · ☆47 · Updated last month
- ☆38 · Updated last month
- Official implementation of Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence · ☆328 · Updated 2 months ago
- [ICCV 2025] GroundingSuite: Measuring Complex Multi-Granular Pixel Grounding · ☆66 · Updated 2 months ago
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" · ☆121 · Updated 3 weeks ago