showlab / videollm-online
VideoLLM-online: Online Video Large Language Model for Streaming Video (CVPR 2024)
☆293 · Updated 5 months ago
Alternatives and similar repositories for videollm-online:
Users interested in videollm-online are comparing it to the repositories listed below.
- Long Context Transfer from Language to Vision ☆359 · Updated 2 months ago
- This is the official code of VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding (ECCV 2024) ☆163 · Updated last month
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆327 · Updated 2 months ago
- This is the official implementation of "Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams" ☆153 · Updated last month
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models ☆198 · Updated 4 months ago
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆279 · Updated 2 weeks ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆249 · Updated last year
- Official Repository of paper VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding ☆251 · Updated 5 months ago
- [CVPR 2024] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding ☆272 · Updated 6 months ago
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆575 · Updated 2 weeks ago
- Tarsier -- a family of large-scale video-language models, which is designed to generate high-quality video descriptions, together with g… ☆235 · Updated this week
- ✨✨Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆449 · Updated last month
- [CVPR 2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments" ☆245 · Updated 7 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆308 · Updated 6 months ago
- ☆156 · Updated 3 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆188 · Updated 3 weeks ago
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation ☆212 · Updated last week
- Official repo and evaluation implementation of VSI-Bench ☆356 · Updated this week
- Code for paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ☆92 · Updated 5 months ago
- Awesome papers & datasets specifically focused on long-term videos ☆241 · Updated 2 months ago
- [AAAI 2025] Cobra: Extending Mamba to Multi-modal Large Language Model for Efficient Inference ☆267 · Updated 3 weeks ago
- ☆344 · Updated 2 months ago
- [COLM 2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs ☆134 · Updated 5 months ago
- [ECCV 2024🔥] Official implementation of the paper "ST-LLM: Large Language Models Are Effective Temporal Learners" ☆137 · Updated 4 months ago
- Official repository for the paper PLLaVA ☆636 · Updated 6 months ago
- ☆130 · Updated 4 months ago
- ☆173 · Updated 6 months ago
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding ☆126 · Updated this week
- Official code for the Goldfish model for long video understanding and MiniGPT4-video for short video understanding ☆585 · Updated last month