aim-uofa / Omni-R1
Official Repo of Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration
☆73 · Updated 2 months ago
Alternatives and similar repositories for Omni-R1
Users interested in Omni-R1 are comparing it to the repositories listed below.
- ACTIVE-O3: Empowering Multimodal Large Language Models with Active Perception via GRPO ☆68 · Updated 2 months ago
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆184 · Updated 3 months ago
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆71 · Updated last month
- Official implementation of Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence ☆315 · Updated last month
- Official implementation of "Ross3D: Reconstructive Visual Instruction Tuning with 3D-Awareness" ☆47 · Updated 2 weeks ago
- ☆30 · Updated 8 months ago
- Code for MetaMorph: Multimodal Understanding and Generation via Instruction Tuning ☆200 · Updated 3 months ago
- [ICCV 2025] Code release of Harmonizing Visual Representations for Unified Multimodal Understanding and Generation ☆151 · Updated 2 months ago
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning ☆97 · Updated last month
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆102 · Updated 4 months ago
- [CVPR 2025 (Oral)] Open implementation of "RandAR" ☆186 · Updated 3 weeks ago
- A collection of vision foundation models unifying understanding and generation ☆57 · Updated 7 months ago
- Official implementation of the paper "Transfer between Modalities with MetaQueries" ☆198 · Updated 3 weeks ago
- Official repository for ReasonGen-R1 ☆57 · Updated last month
- Official repository of "ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing" ☆52 · Updated last month
- ☆145 · Updated last month
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? ☆78 · Updated 2 weeks ago
- A paper list for spatial reasoning ☆129 · Updated last month
- Official repository of "GoT: Unleashing Reasoning Capability of Multimodal Large Language Model for Visual Generation and Editing" ☆272 · Updated 3 months ago
- Code release for "PISA Experiments: Exploring Physics Post-Training for Video Diffusion Models by Watching Stuff Drop" (ICML 2025) ☆39 · Updated 3 months ago
- UniVG-R1: Reasoning Guided Universal Visual Grounding with Reinforcement Learning ☆133 · Updated 2 months ago
- [ICML 2025] Code and data for the paper "Towards World Simulator: Crafting Physical Commonsense-Based Benchmark for Video Generation" ☆117 · Updated 9 months ago
- ☆82 · Updated last week
- ☆99 · Updated 4 months ago
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation ☆374 · Updated 3 months ago
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆120 · Updated last week
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆63 · Updated 3 weeks ago
- TStar is a unified temporal search framework for long-form video question answering ☆59 · Updated 4 months ago
- ☆87 · Updated last month
- WISE: A World Knowledge-Informed Semantic Evaluation for Text-to-Image Generation ☆143 · Updated this week