🔥🔥 MLVU: Multi-task Long Video Understanding Benchmark
★242, updated Aug 21, 2025
Alternatives and similar repositories for MLVU
Users interested in MLVU are comparing it to the repositories listed below.
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs (★54, updated Mar 9, 2025)
- Long Context Transfer from Language to Vision (★402, updated Mar 18, 2025)
- [NeurIPS 2024 D&B] Official dataloader and evaluation scripts for LongVideoBench (★113, updated Jul 27, 2024)
- 🔥🔥 First-ever hour-scale video understanding models (★611, updated Jul 14, 2025)
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis (★731, updated Dec 8, 2025)
- [ICCV 2025] LVBench: An Extreme Long Video Understanding Benchmark (★137, updated Jul 9, 2025)
- ★32, updated Jul 29, 2024
- Official implementation of the paper "AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding" (★88, updated Apr 23, 2025)
- ★109, updated Dec 30, 2024
- [ICLR 2026] VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling (★511, updated Nov 18, 2025)
- Official implementation of ICCV 2025 "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" (★273, updated Oct 15, 2025)
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … (★129, updated Apr 4, 2025)
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" (★40, updated Mar 16, 2025)
- ★37, updated Sep 16, 2024
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs (★1,278, updated Jan 23, 2025)
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding (★686, updated Jan 29, 2025)
- ★80, updated Nov 24, 2024
- Official repository of InfiniBench: A Benchmark for Large Multi-Modal Models in Long-Form Movies and TV Shows (★19, updated Nov 4, 2025)
- ★156, updated Oct 31, 2024
- ✨ First open-source R1-like Video-LLM [2025/02/18] (★381, updated Feb 23, 2025)
- Official code of "Towards Event-oriented Long Video Understanding" (★12, updated Jul 26, 2024)
- Awesome papers & datasets specifically focused on long-term videos (★355, updated Oct 9, 2025)
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and cloud… (★3,771, updated Nov 28, 2025)
- ★159, updated Jan 16, 2025
- Official repository for the paper PLLaVA (★676, updated Jul 28, 2024)
- Official repository of the paper "VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding" (★293, updated Aug 5, 2025)
- A versatile Video-LLM for long and short video understanding with superior temporal localization ability (★106, updated Nov 28, 2024)
- [CVPR 2024 Highlight] [VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as MiniGPT-4, StableLM, and MOSS (★3,335, updated Jan 18, 2025)
- One-for-all multimodal evaluation toolkit across text, image, video, and audio tasks (★3,750, updated this week)
- ★37, updated Nov 8, 2024
- ★54, updated Mar 19, 2025
- A lightweight, flexible Video-MLLM developed by the Tencent QQ Multimedia Research Team (★74, updated Oct 14, 2024)
- ★138, updated Sep 29, 2024
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model (★281, updated Jun 25, 2024)
- ★4,577, updated Sep 14, 2025
- [EMNLP 2023] Official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" (★107, updated Aug 21, 2025)
- Repo for the paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" (★48, updated Sep 3, 2025)
- ★242, updated Jun 4, 2025
- ★139, updated Nov 17, 2025