aim-uofa / Active-o3
ACTIVE-O3: Empowering Multimodal Large Language Models with Active Perception via GRPO
☆72 · Updated 4 months ago
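Active-o3's description names GRPO (Group Relative Policy Optimization), which scores each sampled rollout against the statistics of its own sampling group instead of a learned value baseline. As a rough orientation only, and not taken from the Active-o3 codebase (function name and epsilon are illustrative), a minimal sketch of that group-relative advantage might look like:

```python
# Minimal sketch of the group-relative advantage used in GRPO.
# Illustrative only; not from the Active-o3 repository.
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: (num_prompts, group_size) scalar rewards for sampled rollouts.

    Each rollout's advantage is its reward standardized within its group,
    so no separate critic/value network is needed.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled responses each.
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.5, 0.5, 1.0, 0.0]])
print(grpo_advantages(rewards))
```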
Alternatives and similar repositories for Active-o3
Users interested in Active-o3 are comparing it to the repositories listed below.
- [NeurIPS 2025] Official Repo of Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration ☆83 · Updated 4 months ago
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆81 · Updated 3 months ago
- Visual Planning: Let's Think Only with Images ☆271 · Updated 4 months ago
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT ☆79 · Updated 2 months ago
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces ☆81 · Updated 4 months ago
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆168 · Updated 2 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆64 · Updated 2 months ago
- The official repository for the paper "Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reasoning" ☆139 · Updated 3 weeks ago
- Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World" ☆109 · Updated last week
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆150 · Updated 2 weeks ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆73 · Updated 2 months ago
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning ☆122 · Updated last month
- Official implementation of LaViDa: A Large Diffusion Language Model for Multimodal Understanding ☆157 · Updated 2 months ago
- [NeurIPS 2025] Pixel-Level Reasoning Model trained with RL ☆232 · Updated last month
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆119 · Updated 6 months ago
- LEO: A powerful hybrid multimodal LLM ☆18 · Updated 8 months ago
- [arXiv: 2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆87 · Updated 7 months ago
- [arXiv 2025] Can MLLMs Guide Me Home? A Benchmark Study on Fine-Grained Visual Reasoning from Transit Maps ☆67 · Updated last week
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆48 · Updated 2 months ago
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆45 · Updated this week
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆60 · Updated 2 months ago
- Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing ☆71 · Updated 2 months ago