williamium3000 / awesome-mllm-grounding
Awesome papers for multi-modal LLMs with grounding ability
☆19 · Updated last month
Alternatives and similar repositories for awesome-mllm-grounding
Users interested in awesome-mllm-grounding are comparing it to the repositories listed below:
- Latest open-source "Thinking with images" (O3/O4-mini) papers, covering training-free, SFT-based, and RL-enhanced methods for "fine-grain… ☆98 · Updated 3 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆127 · Updated 3 months ago
- ☆126 · Updated 8 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆79 · Updated last year
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆75 · Updated last year
- [ICLR 2025] Reconstructive Visual Instruction Tuning ☆125 · Updated 7 months ago
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆51 · Updated 10 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆91 · Updated 5 months ago
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆95 · Updated 10 months ago
- ☆46 · Updated 10 months ago
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆88 · Updated last year
- Official code for the NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆162 · Updated last month
- ☆99 · Updated 10 months ago
- Mitigating Shortcuts in Visual Reasoning with Reinforcement Learning ☆41 · Updated 4 months ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆98 · Updated last year
- Repository of the paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models ☆37 · Updated 2 years ago
- VCR-Bench: A Comprehensive Evaluation Framework for Video Chain-of-Thought Reasoning ☆32 · Updated 4 months ago
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆133 · Updated 3 months ago
- Official code for the paper: [CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster. ☆97 · Updated 4 months ago
- [NeurIPS 2025] MINT-CoT: Enabling Interleaved Visual Tokens in Mathematical Chain-of-Thought Reasoning ☆87 · Updated 2 months ago
- [CVPR 2025] The official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆194 · Updated 5 months ago
- [CVPR 2025] LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding ☆77 · Updated 4 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆197 · Updated 4 months ago
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆70 · Updated 9 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆142 · Updated last month
- A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models! ☆135 · Updated last year
- [LLaVA-Video-R1] ✨ First Adaptation of R1 to LLaVA-Video (2025-03-18) ☆35 · Updated 6 months ago
- The official implementation of RAR ☆92 · Updated last year
- [EMNLP 2023] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆96 · Updated 3 months ago
- [NeurIPS 2025] The official repository for our paper, "Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reason… ☆144 · Updated 2 months ago