Awesome papers & datasets specifically focused on long-term videos.
⭐355 · Oct 9, 2025 · Updated 5 months ago
Alternatives and similar repositories for Awesome_Long_Form_Video_Understanding
Users interested in Awesome_Long_Form_Video_Understanding are comparing it to the repositories listed below.
- 🔥🔥🔥 [IEEE TCSVT] Latest Papers, Codes and Datasets on Vid-LLMs. ⭐3,108 · Dec 20, 2025 · Updated 2 months ago
- [CVPR'2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments". ⭐296 · Jun 13, 2024 · Updated last year
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ⭐688 · Jan 29, 2025 · Updated last year
- A lightweight, flexible Video-MLLM developed by the Tencent QQ Multimedia Research Team. ⭐74 · Oct 14, 2024 · Updated last year
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ⭐412 · May 8, 2025 · Updated 10 months ago
- [NeurIPS 2024 D&B] Official Dataloader and Evaluation Scripts for LongVideoBench. ⭐113 · Jul 27, 2024 · Updated last year
- Code for the CVPR 2025 paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ⭐155 · Jun 23, 2025 · Updated 8 months ago
- [CVPR 2024] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding ⭐347 · Jul 19, 2024 · Updated last year
- [AAAI 2025] VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding ⭐126 · Dec 10, 2024 · Updated last year
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ⭐731 · Dec 8, 2025 · Updated 3 months ago
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability ⭐106 · Nov 28, 2024 · Updated last year
- ⭐80 · Nov 24, 2024 · Updated last year
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] ⭐382 · Feb 23, 2025 · Updated last year
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ⭐129 · Apr 4, 2025 · Updated 11 months ago
- 🔥🔥 MLVU: Multi-task Long Video Understanding Benchmark ⭐242 · Aug 21, 2025 · Updated 6 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ⭐835 · Dec 14, 2025 · Updated 3 months ago
- ⭐109 · Dec 30, 2024 · Updated last year
- A simple and effective feature extractor for untrimmed videos ⭐13 · Sep 1, 2022 · Updated 3 years ago
- ⭐107 · Jul 30, 2024 · Updated last year
- ⭐140 · Nov 17, 2025 · Updated 3 months ago
- ⭐204 · Jul 12, 2024 · Updated last year
- Long Context Transfer from Language to Vision ⭐402 · Mar 18, 2025 · Updated 11 months ago
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ⭐2,208 · Dec 15, 2025 · Updated 2 months ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ⭐54 · Mar 9, 2025 · Updated last year
- ⭐138 · Sep 29, 2024 · Updated last year
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ⭐43 · Mar 11, 2025 · Updated last year
- Accelerating Vision-Language Pretraining with Free Language Modeling (CVPR 2023) ⭐32 · May 15, 2023 · Updated 2 years ago
- A paper list of recent works on token compression for ViTs and VLMs ⭐850 · Mar 3, 2026 · Updated last week
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models