🔥🔥 First-ever hour-scale video understanding models
⭐615 · Jul 14, 2025 · Updated 8 months ago
Alternatives and similar repositories for Video-XL
Users that are interested in Video-XL are comparing it to the libraries listed below
- 🔥🔥 MLVU: Multi-task Long Video Understanding Benchmark ⭐242 · Aug 21, 2025 · Updated 7 months ago
- Long Context Transfer from Language to Vision ⭐402 · Mar 18, 2025 · Updated last year
- [ICLR2026] VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ⭐511 · Nov 18, 2025 · Updated 4 months ago
- [ICML 2025] Official PyTorch implementation of LongVU ⭐424 · May 8, 2025 · Updated 10 months ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ⭐55 · Mar 9, 2025 · Updated last year
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ⭐213 · Jan 6, 2025 · Updated last year
- Frontier Multimodal Foundation Models for Image and Video Understanding ⭐1,128 · Aug 14, 2025 · Updated 7 months ago
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] ⭐382 · Feb 23, 2025 · Updated last year
- 🧭 VideoMind: A Chain-of-LoRA Agent for Temporal-Grounded Video Reasoning (ICLR 2026) ⭐311 · Feb 8, 2026 · Updated last month
- ⭐4,607 · Sep 14, 2025 · Updated 6 months ago
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ⭐689 · Jan 29, 2025 · Updated last year
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability ⭐106 · Nov 28, 2024 · Updated last year
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ⭐1,284 · Jan 23, 2025 · Updated last year
- Official repository for the paper PLLaVA ⭐676 · Jul 28, 2024 · Updated last year
- This is the official implementation of ICCV 2025 "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" ⭐274 · Oct 15, 2025 · Updated 5 months ago
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ⭐732 · Dec 8, 2025 · Updated 3 months ago
- Official implementation of paper AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding ⭐88 · Apr 23, 2025 · Updated 10 months ago
- [ICCV 2025] Dynamic-VLM ⭐28 · Dec 16, 2024 · Updated last year
- [NeurIPS'24 D&B] Official Dataloader and Evaluation Scripts for LongVideoBench ⭐115 · Jul 27, 2024 · Updated last year
- [NIPS2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning ⭐262 · Oct 18, 2025 · Updated 5 months ago
- ⭐107 · Jul 30, 2024 · Updated last year
- [ICCV 2025] LVBench: An Extreme Long Video Understanding Benchmark ⭐140 · Jul 9, 2025 · Updated 8 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ⭐837 · Dec 14, 2025 · Updated 3 months ago
- Official implementation of paper ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding ⭐40 · Mar 16, 2025 · Updated last year
- 🔥🔥🔥 [IEEE TCSVT] Latest Papers, Codes and Datasets on Vid-LLMs ⭐3,116 · Updated this week
- Tarsier -- a family of large-scale video-language models, which is designed to generate high-quality video descriptions, together with g… ⭐530 · Aug 14, 2025 · Updated 7 months ago
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ⭐413 · May 8, 2025 · Updated 10 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ⭐115 · Dec 24, 2025 · Updated 2 months ago
- ✨✨ [NeurIPS 2025] This is the official implementation of our paper "Video-RAG: Visually-aligned Retrieval-Augmented Long Video Comprehensi… ⭐404 · Jan 14, 2026 · Updated 2 months ago
- Official GPU implementation of the paper "PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance" ⭐132 · Nov 19, 2024 · Updated last year
- ⭐37 · Nov 8, 2024 · Updated last year
- [ECCV'24 Oral] PiTe: Pixel-Temporal Alignment for Large Video-Language Model ⭐17 · Feb 13, 2025 · Updated last year
- [ECCV 2024 🔥] Official implementation of the paper "ST-LLM: Large Language Models Are Effective Temporal Learners" ⭐151 · Sep 10, 2024 · Updated last year
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning ⭐143 · Aug 21, 2025 · Updated 7 months ago
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ⭐237 · Nov 7, 2025 · Updated 4 months ago
- [ICML 2025] Official repository for paper "Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation" ⭐190 · Sep 23, 2025 · Updated 5 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ⭐137 · Jul 28, 2025 · Updated 7 months ago
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ⭐150 · Aug 22, 2025 · Updated 6 months ago
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… ⭐3,786 · Mar 12, 2026 · Updated last week