[ICCV 2025] LVBench: An Extreme Long Video Understanding Benchmark
⭐141 · Jul 9, 2025 · Updated 8 months ago
Alternatives and similar repositories for LVBench
Users interested in LVBench are comparing it to the repositories listed below.
- [NeurIPS 2024 D&B] Official dataloader and evaluation scripts for LongVideoBench. ⭐115 · Jul 27, 2024 · Updated last year
- 🔥🔥 MLVU: Multi-task Long Video Understanding Benchmark ⭐242 · Aug 21, 2025 · Updated 7 months ago
- ⭐32 · Jul 29, 2024 · Updated last year
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ⭐55 · Mar 9, 2025 · Updated last year
- ⭐11 · Aug 4, 2024 · Updated last year
- ⭐109 · Dec 30, 2024 · Updated last year
- Long Context Transfer from Language to Vision ⭐402 · Mar 18, 2025 · Updated last year
- Official InfiniBench: A Benchmark for Large Multi-Modal Models in Long-Form Movies and TV Shows ⭐19 · Nov 4, 2025 · Updated 4 months ago
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ⭐732 · Dec 8, 2025 · Updated 3 months ago
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ⭐689 · Jan 29, 2025 · Updated last year
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ⭐130 · Apr 4, 2025 · Updated 11 months ago
- [ICLR 2026] VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ⭐511 · Nov 18, 2025 · Updated 4 months ago
- Official code of *Towards Event-oriented Long Video Understanding* ⭐12 · Jul 26, 2024 · Updated last year
- The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" [CVPR 2025] ⭐21 · Feb 27, 2025 · Updated last year
- 🔥🔥 First-ever hour-scale video understanding models ⭐616 · Jul 14, 2025 · Updated 8 months ago
- A Massive Multi-Discipline Lecture Understanding Benchmark ⭐33 · Nov 1, 2025 · Updated 4 months ago
- A lightweight, flexible Video-MLLM developed by the Tencent QQ Multimedia Research Team. ⭐74 · Oct 14, 2024 · Updated last year
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ⭐323 · Jan 20, 2025 · Updated last year
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ⭐139 · Jun 4, 2025 · Updated 9 months ago
- ⭐157 · Oct 31, 2024 · Updated last year
- [ICML 2025] Official PyTorch implementation of LongVU ⭐424 · May 8, 2025 · Updated 10 months ago
- ⭐13 · Oct 19, 2023 · Updated 2 years ago
- [ECCV '24 Oral] PiTe: Pixel-Temporal Alignment for Large Video-Language Model ⭐17 · Feb 13, 2025 · Updated last year
- 👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ⭐74 · Jan 20, 2025 · Updated last year
- Official implementation of Kangaroo: A Powerful Video-Language Model Supporting Long-context Video Input ⭐67 · Aug 30, 2024 · Updated last year
- Official implementation of the paper ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding ⭐40 · Mar 16, 2025 · Updated last year
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ⭐190 · Dec 19, 2025 · Updated 3 months ago
- A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models! ⭐138 · Dec 31, 2023 · Updated 2 years ago
- [ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the cap… ⭐1,499 · Aug 5, 2025 · Updated 7 months ago
- ⭐17 · Feb 22, 2024 · Updated 2 years ago
- ⭐107 · Jul 30, 2024 · Updated last year
- ⭐222 · Jul 5, 2024 · Updated last year
- [EMNLP '23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ⭐110 · Aug 21, 2025 · Updated 7 months ago
- [ECCV 2024 🔥] Official implementation of the paper "ST-LLM: Large Language Models Are Effective Temporal Learners" ⭐154 · Sep 10, 2024 · Updated last year
- VideoMathQA is a benchmark designed to evaluate mathematical reasoning in real-world educational videos ⭐23 · Jan 26, 2026 · Updated last month
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ⭐1,284 · Jan 23, 2025 · Updated last year
- ⭐37 · Nov 8, 2024 · Updated last year
- VideoHallucer: The first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ⭐42 · Dec 16, 2025 · Updated 3 months ago
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ⭐153 · Dec 5, 2024 · Updated last year