ttengwang / Awesome_Long_Form_Video_Understanding
Awesome papers & datasets specifically focused on long-term videos.
☆280 · Updated 7 months ago
Alternatives and similar repositories for Awesome_Long_Form_Video_Understanding
Users interested in Awesome_Long_Form_Video_Understanding are comparing it to the libraries listed below.
- [CVPR 2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments" ☆280 · Updated last year
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆378 · Updated last month
- [NeurIPS 2024 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆331 · Updated 6 months ago
- R1-like Video-LLM for Temporal Grounding ☆101 · Updated this week
- Official Implementation of "Chrono: A Simple Blueprint for Representing Time in MLLMs" ☆88 · Updated 3 months ago
- 🔥🔥 MLVU: Multi-task Long Video Understanding Benchmark ☆207 · Updated last week
- 🔥 CVPR 2025 Multimodal Large Language Models Paper List ☆144 · Updated 3 months ago
- ☆338 · Updated last year
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆99 · Updated 5 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆285 · Updated 8 months ago
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆438 · Updated last week
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆324 · Updated 11 months ago
- [ICCV 2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models ☆330 · Updated last year
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆179 · Updated 3 weeks ago
- Foundation Models for Video Understanding: A Survey ☆123 · Updated 9 months ago
- [NeurIPS 2022 Spotlight] Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations ☆136 · Updated last year
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆185 · Updated last year
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] ☆348 · Updated 4 months ago
- [CVPR 2025] Code for the paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ☆119 · Updated 3 months ago
- An up-to-date curated list of state-of-the-art research on hallucinations in large vision-language models: papers & resources ☆138 · Updated last month
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability ☆94 · Updated 6 months ago
- Official repository for VisionZip (CVPR 2025) ☆305 · Updated last month
- Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models ☆116 · Updated 3 months ago
- [ICLR 2025] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆217 · Updated 2 months ago
- [AAAI 2025] VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding ☆103 · Updated 6 months ago
- [CVPR 2024] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding ☆314 · Updated 11 months ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆117 · Updated 2 months ago
- [ICML 2024] Video Chain of Thought: code for the paper "Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition" ☆144 · Updated 4 months ago
- [CVPR 2024] Bridging the Gap: A Unified Video Comprehension Framework for Moment Retrieval and Highlight Detection ☆95 · Updated 11 months ago
- 🌀 R2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding (ECCV 2024) ☆84 · Updated 11 months ago