williamium3000 / awesome-mllm-grounding
Awesome papers for multi-modal LLMs with grounding ability
☆19 · Updated last year
Alternatives and similar repositories for awesome-mllm-grounding
Users interested in awesome-mllm-grounding are comparing it to the repositories listed below.
- Latest open-source "Thinking with images" (O3/O4-mini) papers, covering training-free, SFT-based, and RL-enhanced methods for "fine-grain… ☆89 · Updated last month
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆79 · Updated 10 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆117 · Updated last month
- ☆120 · Updated 6 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆88 · Updated 3 months ago
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆70 · Updated 7 months ago
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆86 · Updated last year
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆48 · Updated 8 months ago
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆116 · Updated 5 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆73 · Updated last year
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆132 · Updated 6 months ago
- VCR-Bench: A Comprehensive Evaluation Framework for Video Chain-of-Thought Reasoning ☆32 · Updated 2 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆103 · Updated 3 weeks ago
- The official implementation of RAR ☆93 · Updated last year
- ☆45 · Updated 8 months ago
- ☆88 · Updated 8 months ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆82 · Updated 2 months ago
- Official code for NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆137 · Updated last month
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆92 · Updated 8 months ago
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) ☆91 · Updated 11 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆63 · Updated 2 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆63 · Updated last year
- [LLaVA-Video-R1] ✨ First Adaptation of R1 to LLaVA-Video (2025-03-18) ☆32 · Updated 4 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆192 · Updated 2 months ago
- The official repository for our paper, "Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reasoning". ☆138 · Updated last week
- [ICCV 2025] VisRL: Intention-Driven Visual Perception via Reinforced Reasoning ☆39 · Updated 3 months ago
- ☆86 · Updated last year
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆72 · Updated 4 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆62 · Updated 7 months ago
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆70 · Updated 2 months ago