VectorSpaceLab / Video-XL
🔥🔥 First-ever hour-scale video understanding models
☆259 Updated this week
Alternatives and similar repositories for Video-XL:
Users interested in Video-XL are comparing it to the libraries listed below.
- This is the official implementation of "Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams" ☆173 Updated 3 months ago
- Long Context Transfer from Language to Vision ☆368 Updated last week
- ☆181 Updated 8 months ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆117 Updated 4 months ago
- This is the official implementation of our paper "Video-RAG: Visually-aligned Retrieval-Augmented Long Video Comprehension" ☆161 Updated last month
- This is the official code of VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding (ECCV 2024) ☆182 Updated 3 months ago
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆373 Updated this week
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models ☆206 Updated 6 months ago
- ☆365 Updated 3 weeks ago
- Official Repository of paper VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding ☆264 Updated 7 months ago
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆351 Updated 4 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆135 Updated 3 weeks ago
- Tarsier -- a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ☆327 Updated this week
- [CVPR'2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments" ☆261 Updated 9 months ago
- Multimodal Models in Real World ☆452 Updated last month
- MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer ☆220 Updated 11 months ago
- ☆139 Updated 2 months ago
- [ECCV 2024🔥] Official implementation of the paper "ST-LLM: Large Language Models Are Effective Temporal Learners" ☆141 Updated 6 months ago
- Explore the Limits of Omni-modal Pretraining at Scale ☆97 Updated 6 months ago
- HumanOmni ☆129 Updated 2 weeks ago
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions ☆209 Updated 8 months ago
- ☆80 Updated 10 months ago
- ☆166 Updated 8 months ago
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] ☆289 Updated last month
- Official repository for the paper PLLaVA ☆643 Updated 7 months ago
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data. ☆223 Updated last month
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆199 Updated 2 months ago
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆116 Updated 2 months ago
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ☆158 Updated 7 months ago
- ☆176 Updated 8 months ago