Awesome papers & datasets specifically focused on long-term videos.
★355 · Updated Oct 9, 2025 (4 months ago)
Alternatives and similar repositories for Awesome_Long_Form_Video_Understanding
Users who are interested in Awesome_Long_Form_Video_Understanding are comparing it to the repositories listed below.
- 🔥🔥🔥 [IEEE TCSVT] Latest Papers, Codes and Datasets on Vid-LLMs. — ★3,087 · Updated Dec 20, 2025 (2 months ago)
- [CVPR'2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments". — ★294 · Updated Jun 13, 2024 (last year)
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding — ★686 · Updated Jan 29, 2025 (last year)
- A lightweight, flexible Video-MLLM developed by the Tencent QQ Multimedia Research Team. — ★74 · Updated Oct 14, 2024 (last year)
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding — ★409 · Updated May 8, 2025 (9 months ago)
- [NeurIPS'24 D&B] Official Dataloader and Evaluation Scripts for LongVideoBench. — ★113 · Updated Jul 27, 2024 (last year)
- Code for the CVPR 2025 paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" — ★154 · Updated Jun 23, 2025 (8 months ago)
- [CVPR 2024] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding — ★346 · Updated Jul 19, 2024 (last year)
- [AAAI 2025] VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding — ★126 · Updated Dec 10, 2024 (last year)
- ✨✨[CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis — ★731 · Updated Dec 8, 2025 (2 months ago)
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability — ★106 · Updated Nov 28, 2024 (last year)
- ✨First Open-Source R1-like Video-LLM [2025/02/18] — ★381 · Updated Feb 23, 2025 (last year)
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … — ★129 · Updated Apr 4, 2025 (10 months ago)
- 🔥🔥MLVU: Multi-task Long Video Understanding Benchmark — ★241 · Updated Aug 21, 2025 (6 months ago)
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥the first paper to explore R1 for video] — ★831 · Updated Dec 14, 2025 (2 months ago)
- A simple and effective feature extractor for untrimmed videos — ★13 · Updated Sep 1, 2022 (3 years ago)
- Long Context Transfer from Language to Vision — ★402 · Updated Mar 18, 2025 (11 months ago)
- [ECCV2024] Video Foundation Models & Data for Multimodal Understanding — ★2,201 · Updated Dec 15, 2025 (2 months ago)
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs — ★54 · Updated Mar 9, 2025 (11 months ago)
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences — ★43 · Updated Mar 11, 2025 (11 months ago)
- Accelerating Vision-Language Pretraining with Free Language Modeling (CVPR 2023) — ★32 · Updated May 15, 2023 (2 years ago)
- A paper list of recent works on token compression for ViT and VLM — ★835 · Updated Feb 24, 2026 (last week)
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models — ★51 · Updated Jun 12, 2025 (8 months ago)
- A new multi-shot video understanding benchmark Shot2Story with comprehensive video summaries and detailed shot-level captions. — ★168 · Updated Jan 30, 2025 (last year)
- [NeurIPS'25] Time-R1: Post-Training Large Vision Language Model for Temporal Video Grounding — ★79 · Updated Dec 14, 2025 (2 months ago)
- [AAAI 2025] Grounded Multi-Hop VideoQA in Long-Form Egocentric Videos — ★32 · Updated May 27, 2025 (9 months ago)
- Official implementation of HawkEye: Training Video-Text LLMs for Grounding Text in Videos — ★46 · Updated Apr 29, 2024 (last year)
- [CVPR 2026] TimeLens: Rethinking Video Temporal Grounding with Multimodal LLMs — ★103 · Updated Feb 22, 2026 (last week)
- [ICLR2026] VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling — ★511 · Updated Nov 18, 2025 (3 months ago)
- R2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding (ECCV 2024) — ★91 · Updated Jul 2, 2024 (last year)
- Latest Advances on (RL based) Multimodal Reasoning and Generation in Multimodal Large Language Models — ★47 · Updated Oct 30, 2025 (4 months ago)
- This is the official implementation of ICCV 2025 "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" — ★271 · Updated Oct 15, 2025 (4 months ago)