🔥🔥 First-ever hour-scale video understanding models
★610 · Jul 14, 2025 · Updated 7 months ago
Alternatives and similar repositories for Video-XL
Users interested in Video-XL are comparing it to the repositories listed below.
- 🔥🔥 MLVU: Multi-task Long Video Understanding Benchmark ★241 · Aug 21, 2025 · Updated 6 months ago
- Long Context Transfer from Language to Vision ★402 · Mar 18, 2025 · Updated 11 months ago
- [ICLR 2026] VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ★510 · Nov 18, 2025 · Updated 3 months ago
- [ICML 2025] Official PyTorch implementation of LongVU ★423 · May 8, 2025 · Updated 9 months ago
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] ★381 · Feb 23, 2025 · Updated last year
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ★54 · Mar 9, 2025 · Updated 11 months ago
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability ★106 · Nov 28, 2024 · Updated last year
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ★213 · Jan 6, 2025 · Updated last year
- Frontier Multimodal Foundation Models for Image and Video Understanding ★1,109 · Aug 14, 2025 · Updated 6 months ago
- ★4,577 · Sep 14, 2025 · Updated 5 months ago
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ★731 · Dec 8, 2025 · Updated 2 months ago
- Official implementation of the ICCV 2025 paper "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" ★271 · Oct 15, 2025 · Updated 4 months ago
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" ★40 · Mar 16, 2025 · Updated 11 months ago
- Official repository for the paper PLLaVA ★676 · Jul 28, 2024 · Updated last year
- VideoMind: A Chain-of-LoRA Agent for Temporal-Grounded Video Reasoning (ICLR 2026) ★305 · Feb 8, 2026 · Updated 3 weeks ago
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ★1,277 · Jan 23, 2025 · Updated last year
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ★686 · Jan 29, 2025 · Updated last year
- [NeurIPS 2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning ★259 · Oct 18, 2025 · Updated 4 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ★831 · Dec 14, 2025 · Updated 2 months ago
- [NeurIPS 2024 D&B] Official dataloader and evaluation scripts for LongVideoBench ★113 · Jul 27, 2024 · Updated last year
- [ICCV 2025] Dynamic-VLM ★28 · Dec 16, 2024 · Updated last year
- 🔥🔥🔥 [IEEE TCSVT] Latest Papers, Codes and Datasets on Vid-LLMs ★3,087 · Dec 20, 2025 · Updated 2 months ago
- ★107 · Jul 30, 2024 · Updated last year
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ★114 · Dec 24, 2025 · Updated 2 months ago
- Official implementation of the paper "AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding" ★88 · Apr 23, 2025 · Updated 10 months ago
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ★233 · Nov 7, 2025 · Updated 3 months ago
- [ICCV 2025] LVBench: An Extreme Long Video Understanding Benchmark ★137 · Jul 9, 2025 · Updated 7 months ago
- Official GPU implementation of the paper "PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance" ★132 · Nov 19, 2024 · Updated last year
- [ECCV 2024 Oral] PiTe: Pixel-Temporal Alignment for Large Video-Language Model ★17 · Feb 13, 2025 · Updated last year
- Tarsier: a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ★520 · Aug 14, 2025 · Updated 6 months ago
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… ★3,766 · Nov 28, 2025 · Updated 3 months ago
- [CVPR 2025] VoCo-LLaMA: Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ★203 · Jun 18, 2025 · Updated 8 months ago
- [AAAI 26 Demo] Official repo for CAT-V (Caption Anything in Video: Object-centric Dense Video Captioning with Spatiotemporal Multimodal P…) ★64 · Jan 27, 2026 · Updated last month
- E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ★74 · Jan 20, 2025 · Updated last year
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning ★140 · Aug 21, 2025 · Updated 6 months ago
- [ECCV 2024 🔥] Official implementation of the paper "ST-LLM: Large Language Models Are Effective Temporal Learners" ★150 · Sep 10, 2024 · Updated last year
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ★145 · Aug 22, 2025 · Updated 6 months ago
- Solve Visual Understanding with Reinforced VLMs ★5,850 · Oct 21, 2025 · Updated 4 months ago
- ✨✨ [NeurIPS 2025] VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction ★2,490 · Mar 28, 2025 · Updated 11 months ago