EvolvingLMMs-Lab / LongVA
Long Context Transfer from Language to Vision
☆371 · Updated last month
Alternatives and similar repositories for LongVA:
Users interested in LongVA are comparing it to the repositories listed below.
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆358 · Updated 4 months ago
- This is the official implementation of "Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams" ☆177 · Updated 3 months ago
- ☆368 · Updated last month
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models ☆213 · Updated 7 months ago
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆320 · Updated last month
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆201 · Updated 3 months ago
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆523 · Updated this week
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆391 · Updated last week
- [CVPR 2024] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding ☆296 · Updated 8 months ago
- This is the official code of VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding (ECCV 2024) ☆189 · Updated 4 months ago
- LVBench: An Extreme Long Video Understanding Benchmark ☆86 · Updated 7 months ago
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation ☆280 · Updated 2 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆373 · Updated 2 weeks ago
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆235 · Updated 8 months ago
- MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer ☆221 · Updated last year
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆338 · Updated 3 weeks ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆110 · Updated 2 weeks ago
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… ☆407 · Updated 3 months ago
- 🔥🔥 First-ever hour-scale video understanding models ☆281 · Updated last week
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆257 · Updated last year
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆181 · Updated 2 weeks ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆294 · Updated 2 months ago
- 🔥🔥 MLVU: Multi-task Long Video Understanding Benchmark ☆195 · Updated 3 weeks ago
- ✨✨ Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆155 · Updated 3 months ago
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆214 · Updated 3 weeks ago
- ☆183 · Updated 9 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆391 · Updated last week
- Tarsier -- a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ☆349 · Updated 3 weeks ago
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆200 · Updated 7 months ago
- [CVPR 2025] VoCo-LLaMA: This repo is the official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models". ☆151 · Updated last month