williamium3000 / awesome-mllm-grounding
Awesome papers on multi-modal LLMs with grounding ability
☆19 · Updated last week
Alternatives and similar repositories for awesome-mllm-grounding
Users interested in awesome-mllm-grounding are comparing it to the repositories listed below.
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆49 · Updated 9 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆79 · Updated 11 months ago
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) ☆91 · Updated 11 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆88 · Updated 4 months ago
- Official code for NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆145 · Updated 2 months ago
- ☆122 · Updated 6 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆74 · Updated last year
- Latest open-source "Thinking with images" (O3/O4-mini) papers, covering training-free, SFT-based, and RL-enhanced methods for "fine-grain… ☆94 · Updated last month
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆86 · Updated last year
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆64 · Updated 2 months ago
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆40 · Updated 6 months ago
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆119 · Updated 6 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆118 · Updated 2 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆193 · Updated 2 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆54 · Updated 4 months ago
- ☆92 · Updated 9 months ago
- Mitigating Shortcuts in Visual Reasoning with Reinforcement Learning ☆37 · Updated 3 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆63 · Updated last year
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆133 · Updated 7 months ago
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆130 · Updated 2 months ago
- A collection of visual instruction tuning datasets. ☆76 · Updated last year
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆108 · Updated last month
- The official implementation of RAR ☆92 · Updated last year
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆70 · Updated 8 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆93 · Updated last month
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆94 · Updated last year
- [CVPR 2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆104 · Updated 4 months ago
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆71 · Updated 2 months ago
- [NeurIPS 2025 Spotlight] Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆67 · Updated 3 weeks ago
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆189 · Updated 3 months ago