AMAP-ML / UniVG-R1
UniVG-R1: Reasoning Guided Universal Visual Grounding with Reinforcement Learning
☆133Updated 2 months ago
Alternatives and similar repositories for UniVG-R1
Users that are interested in UniVG-R1 are comparing it to the libraries listed below
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning☆101Updated last month
- Structured Video Comprehension of Real-World Shorts☆152Updated this week
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos☆128Updated 7 months ago
- VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning☆169Updated 2 months ago
- [ICCV 2025] Code Release of Harmonizing Visual Representations for Unified Multimodal Understanding and Generation☆151Updated 2 months ago
- ICML 2025 - Impossible Videos☆73Updated 2 weeks ago
- ☆145Updated last month
- GoT-R1: Unleashing Reasoning Capability of MLLM for Visual Generation with Reinforcement Learning☆94Updated 2 months ago
- Official repository of "GoT: Unleashing Reasoning Capability of Multimodal Large Language Model for Visual Generation and Editing"☆272Updated 3 months ago
- Empowering Unified MLLM with Multi-granular Visual Generation☆129Updated 6 months ago
- Official Repository of paper: Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing☆79Updated 3 weeks ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark☆121Updated 2 months ago
- ICML 2025☆52Updated this week
- ☆87Updated last month
- Official repository for ReasonGen-R1☆62Updated last month
- Pixel-Level Reasoning Model trained with RL☆187Updated last month
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning?☆63Updated 3 weeks ago
- Vision as a Dialect: Unifying Visual Understanding and Generation via Text-Aligned Representations☆131Updated last month
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation☆103Updated 2 months ago
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding?☆78Updated 2 weeks ago
- ☆107Updated last month
- Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models☆119Updated 4 months ago
- WISE: A World Knowledge-Informed Semantic Evaluation for Text-to-Image Generation☆143Updated last week
- Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World"☆96Updated last month
- Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" ICLR2025☆64Updated 4 months ago
- ☆35Updated last month
- [CVPR 2025] Number it: Temporal Grounding Videos like Flipping Manga☆112Updated 4 months ago
- ☆36Updated last month
- [ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM☆80Updated 9 months ago
- [CVPR 2025] Dispider: Enabling Video LLMs with Active Real-Time Interaction via Disentangled Perception, Decision, and Reaction☆126Updated 4 months ago