yunlong10 / Awesome-LLMs-for-Video-Understanding
🔥🔥🔥 [IEEE TCSVT] Latest Papers, Codes and Datasets on Vid-LLMs.
☆3,076 · Dec 20, 2025 · Updated last month
Alternatives and similar repositories for Awesome-LLMs-for-Video-Understanding
Users interested in Awesome-LLMs-for-Video-Understanding are comparing it to the libraries listed below.
- Awesome papers & datasets specifically focused on long-term videos. ☆352 · Oct 9, 2025 · Updated 4 months ago
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ☆1,277 · Jan 23, 2025 · Updated last year
- 【EMNLP 2024🔥】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,447 · Dec 3, 2024 · Updated last year
- Latest Advances on Multimodal Large Language Models ☆17,337 · Feb 7, 2026 · Updated last week
- ☆4,562 · Sep 14, 2025 · Updated 5 months ago
- [ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the cap… ☆1,490 · Aug 5, 2025 · Updated 6 months ago
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆409 · May 8, 2025 · Updated 9 months ago
- [ECCV2024] Video Foundation Models & Data for Multimodal Understanding ☆2,196 · Dec 15, 2025 · Updated 2 months ago
- ✨✨[CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆730 · Dec 8, 2025 · Updated 2 months ago
- [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding ☆3,124 · Jun 4, 2024 · Updated last year
- [CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS. ☆3,336 · Jan 18, 2025 · Updated last year
- [CVPR'2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments". ☆294 · Jun 13, 2024 · Updated last year
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆684 · Jan 29, 2025 · Updated last year
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks ☆3,816 · Updated this week
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆859 · Jul 29, 2024 · Updated last year
- Official repository for the paper PLLaVA ☆676 · Jul 28, 2024 · Updated last year
- Long Context Transfer from Language to Vision ☆400 · Mar 18, 2025 · Updated 10 months ago
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,446 · Aug 12, 2024 · Updated last year
- One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks ☆3,635 · Updated this week
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆820 · Dec 14, 2025 · Updated 2 months ago
- VideoLLM-online: Online Video Large Language Model for Streaming Video (CVPR 2024) ☆639 · Nov 26, 2025 · Updated 2 months ago
- Qwen3-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud. ☆18,273 · Jan 30, 2026 · Updated 2 weeks ago
- Code for CVPR25 paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ☆154 · Jun 23, 2025 · Updated 7 months ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆261 · Aug 5, 2025 · Updated 6 months ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆128 · Apr 4, 2025 · Updated 10 months ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆11,166 · Nov 18, 2024 · Updated last year
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o's performance. ☆9,806 · Sep 22, 2025 · Updated 4 months ago
- [ICLR2026] VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆503 · Nov 18, 2025 · Updated 2 months ago
- This repository provides a valuable reference for researchers in the field of multimodality; start your exploratory journey in RL-bas… ☆1,350 · Dec 7, 2025 · Updated 2 months ago
- A curated list of recent diffusion models for video generation, editing, and various other applications. ☆5,451 · Feb 3, 2026 · Updated last week
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… ☆3,754 · Nov 28, 2025 · Updated 2 months ago
- [ECCV 2024🔥] Official implementation of the paper "ST-LLM: Large Language Models Are Effective Temporal Learners" ☆150 · Sep 10, 2024 · Updated last year
- A curated list of resources dedicated to hallucination of multimodal large language models (MLLM). ☆979 · Sep 27, 2025 · Updated 4 months ago
- 【ICLR 2024🔥】Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆866 · Mar 25, 2024 · Updated last year
- Video datasets ☆1,607 · Mar 8, 2023 · Updated 2 years ago
- Solve Visual Understanding with Reinforced VLMs ☆5,841 · Oct 21, 2025 · Updated 3 months ago
- A fork to add multimodal model training to open-r1 ☆1,474 · Feb 8, 2025 · Updated last year
- [AAAI 2025] VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding ☆125 · Dec 10, 2024 · Updated last year
- Official code for Goldfish model for long video understanding and MiniGPT4-video for short video understanding ☆640 · Dec 10, 2024 · Updated last year