williamium3000 / awesome-mllm-grounding
Awesome papers for multi-modal LLMs with grounding ability
☆17 · Updated 10 months ago
Alternatives and similar repositories for awesome-mllm-grounding
Users interested in awesome-mllm-grounding are comparing it to the repositories listed below.
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆85 · Updated 9 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆75 · Updated 7 months ago
- Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" (ICLR 2025) ☆53 · Updated 2 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆82 · Updated last month
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆89 · Updated last month
- [CVPR 2025] Official PyTorch Implementation of GLUS: Global-Local Reasoning Unified into A Single Large Language Model for Video Segmenta… ☆39 · Updated last month
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆66 · Updated last month
- [LLaVA-Video-R1] ✨ First Adaptation of R1 to LLaVA-Video (2025-03-18) ☆28 · Updated 3 weeks ago
- ☆84 · Updated 2 months ago
- ☆43 · Updated 5 months ago
- ☆69 · Updated 6 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆60 · Updated 11 months ago
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆69 · Updated 3 months ago
- VCR-Bench: A Comprehensive Evaluation Framework for Video Chain-of-Thought Reasoning ☆29 · Updated last month
- Official PyTorch Code of ReKV (ICLR'25) ☆23 · Updated 2 months ago
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆21 · Updated last year
- 👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ☆58 · Updated 4 months ago
- Repository of paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models ☆37 · Updated last year
- Egocentric Video Understanding Dataset (EVUD) ☆29 · Updated 11 months ago
- Official PyTorch code of GroundVQA (CVPR'24) ☆61 · Updated 8 months ago
- ☆36 · Updated last month
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆35 · Updated last month
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆46 · Updated 2 months ago
- ☆81 · Updated 2 months ago
- [CVPR 2025] PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆105 · Updated 3 months ago
- Latest Advances on (RL-based) Multimodal Reasoning and Generation in Multimodal Large Language Models ☆25 · Updated this week
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆60 · Updated 3 months ago
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? ☆59 · Updated 2 months ago
- ☆91 · Updated last year
- [AAAI 2024] Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation ☆79 · Updated 11 months ago