williamium3000 / awesome-mllm-grounding
Awesome papers for multi-modal LLMs with grounding ability
☆17 · Updated 9 months ago
Alternatives and similar repositories for awesome-mllm-grounding
Users interested in awesome-mllm-grounding are comparing it to the libraries listed below.
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆84 · Updated 8 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆75 · Updated 6 months ago
- [LLaVA-Video-R1] ✨ First Adaptation of R1 to LLaVA-Video (2025-03-18) ☆28 · Updated last week
- [CVPR 2025] Official PyTorch Implementation of GLUS: Global-Local Reasoning Unified into A Single Large Language Model for Video Segmenta… ☆36 · Updated 3 weeks ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆75 · Updated last month
- Official PyTorch code of GroundVQA (CVPR'24) ☆61 · Updated 8 months ago
- [CVPR 2025] LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding ☆42 · Updated 2 months ago
- Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" (ICLR 2025) ☆50 · Updated 2 months ago
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆83 · Updated last month
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆60 · Updated 3 weeks ago
- ☆83 · Updated last month
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆61 · Updated 11 months ago
- 👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ☆58 · Updated 3 months ago
- [ICLR 2025] TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning ☆33 · Updated last month
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? ☆56 · Updated last month
- Egocentric Video Understanding Dataset (EVUD) ☆29 · Updated 10 months ago
- (NeurIPS 2024 Spotlight) TOPA: Extend Large Language Models for Video Understanding via Text-Only Pre-Alignment ☆30 · Updated 7 months ago
- Envolving Temporal Reasoning Capability into LMMs via Temporal Consistent Reward ☆35 · Updated last month
- Repository of paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models ☆37 · Updated last year
- ☆58 · Updated last year
- [NeurIPS 2024] Official Repository of Multi-Object Hallucination in Vision-Language Models ☆29 · Updated 6 months ago
- [NeurIPS 2024] The official code of the paper "Automated Multi-level Preference for MLLMs" ☆19 · Updated 7 months ago
- [ECCV24] VISA: Reasoning Video Object Segmentation via Large Language Model ☆16 · Updated 9 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆30 · Updated last month
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆21 · Updated last year
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆56 · Updated 10 months ago
- ✨ A curated list of papers on uncertainty in multi-modal large language models (MLLMs) ☆44 · Updated last month
- VCR-Bench: A Comprehensive Evaluation Framework for Video Chain-of-Thought Reasoning ☆26 · Updated last month
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆83 · Updated last year
- Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆26 · Updated this week