DAMO-NLP-SG / VideoLLaMA2
VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs
☆1,255 · Updated 10 months ago
Alternatives and similar repositories for VideoLLaMA2
Users interested in VideoLLaMA2 are comparing it to the repositories listed below.
- Frontier Multimodal Foundation Models for Image and Video Understanding ☆1,077 · Updated 4 months ago
- 【ICLR 2024🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆852 · Updated last year
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆853 · Updated last year
- Official code for the Goldfish model for long video understanding and MiniGPT4-video for short video understanding ☆638 · Updated last year
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆674 · Updated 10 months ago
- [ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the cap… ☆1,477 · Updated 4 months ago
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆2,131 · Updated 2 weeks ago
- ✨✨[CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆695 · Updated last week
- Official repository for the paper PLLaVA ☆674 · Updated last year
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆490 · Updated last month
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆402 · Updated 7 months ago
- Tarsier -- a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ☆507 · Updated 4 months ago
- 🔥🔥First-ever hour-scale video understanding models ☆593 · Updated 5 months ago
- 🔥🔥🔥 [IEEE TCSVT] Latest Papers, Codes and Datasets on Vid-LLMs ☆2,966 · Updated 3 weeks ago
- A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings ☆1,421 · Updated 2 months ago
- A Framework of Small-scale Large Multimodal Models ☆933 · Updated 7 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆934 · Updated 4 months ago
- [ICML 2025] Official PyTorch implementation of LongVU ☆412 · Updated 7 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥the first paper to explore R1 for video] ☆777 · Updated this week
- NeurIPS 2024 Paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ☆578 · Updated last year
- 【TMM 2025🔥】 Mixture-of-Experts for Large Vision-Language Models ☆2,283 · Updated 5 months ago
- [CVPR'2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments" ☆293 · Updated last year
- VisionLLM Series ☆1,131 · Updated 9 months ago
- Official repository of the paper VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding ☆291 · Updated 4 months ago
- Official code of VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding (ECCV 2024) ☆276 · Updated last year
- LLM2CLIP makes SOTA pretrained CLIP models even more capable ☆568 · Updated 2 weeks ago
- 【EMNLP 2024🔥】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,413 · Updated last year
- Next-Token Prediction is All You Need ☆2,265 · Updated 3 weeks ago
- [ICCV 2025] LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning ☆2,107 · Updated last week