EvolvingLMMs-Lab / MGPO
High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning
☆44 · Updated 2 weeks ago
Alternatives and similar repositories for MGPO
Users interested in MGPO are comparing it to the libraries listed below.
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆53 · Updated 2 weeks ago
- ☆87 · Updated last month
- ☆52 · Updated last month
- ☆37 · Updated 2 months ago
- ☆45 · Updated 7 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆49 · Updated 3 weeks ago
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆72 · Updated last month
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆60 · Updated 5 months ago
- [ICCV 2025] Dynamic-VLM ☆23 · Updated 7 months ago
- Repo for paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆49 · Updated 4 months ago
- [NeurIPS 2024] TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration ☆24 · Updated 9 months ago
- [CVPR 2025] The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" ☆19 · Updated 5 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆62 · Updated 3 weeks ago
- ☆66 · Updated last month
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆35 · Updated last year
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆47 · Updated 7 months ago
- [ICCV 2025] Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆71 · Updated 5 months ago
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆38 · Updated last year
- Official code for paper "GRIT: Teaching MLLMs to Think with Images" ☆114 · Updated this week
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆26 · Updated 7 months ago
- Official repository of "Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tuning" ☆35 · Updated 5 months ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆48 · Updated last month
- ☆62 · Updated this week
- [arXiv: 2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆76 · Updated 5 months ago
- Official implementation of MIA-DPO ☆62 · Updated 6 months ago
- ☆51 · Updated 3 weeks ago
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆20 · Updated 9 months ago
- ☆23 · Updated last month
- ☆41 · Updated last month
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning ☆97 · Updated last month