DAMO-NLP-SG / VideoLLaMA2
VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs
☆847 · Updated this week
Related projects
Alternatives and complementary repositories for VideoLLaMA2
- Official code for the Goldfish model (long video understanding) and MiniGPT4-video (short video understanding) ☆553 · Updated last month
- Official repository for the paper PLLaVA ☆581 · Updated 3 months ago
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆524 · Updated last week
- ✨✨ Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆402 · Updated 4 months ago
- Open-source evaluation toolkit for large vision-language models (LVLMs), supporting 160+ VLMs and 50+ benchmarks ☆1,305 · Updated this week
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆728 · Updated 3 months ago
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆283 · Updated 5 months ago
- 🔥🔥🔥 Latest Papers, Codes and Datasets on Vid-LLMs ☆1,511 · Updated last month
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆666 · Updated 2 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆703 · Updated 9 months ago
- A Framework of Small-scale Large Multimodal Models ☆635 · Updated 3 weeks ago
- [ICLR 2024🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆717 · Updated 7 months ago
- Anole: An Open, Autoregressive, and Native Multimodal Model for Interleaved Image-Text Generation ☆676 · Updated 3 months ago
- Long Context Transfer from Language to Vision ☆328 · Updated 2 weeks ago
- 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3) ☆807 · Updated 3 months ago
- Repository for Show-o: One Single Transformer to Unify Multimodal Understanding and Generation ☆1,011 · Updated this week
- ✨✨ VITA: Towards Open-Source Interactive Omni Multimodal LLM ☆947 · Updated 2 weeks ago
- A family of lightweight multimodal models ☆928 · Updated 2 weeks ago
- Next-Token Prediction is All You Need ☆1,793 · Updated 2 weeks ago
- [CVPR 2024] OneLLM: One Framework to Align All Modalities with Language ☆583 · Updated 2 weeks ago
- Official repository of the paper VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding ☆212 · Updated 2 months ago
- [NeurIPS 2024] A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ☆357 · Updated 2 weeks ago
- A novel Multimodal Large Language Model (MLLM) architecture designed to structurally align visual and textual embeddings ☆507 · Updated this week
- InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output ☆2,511 · Updated 3 weeks ago
- Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation ☆913 · Updated last week
- LLaVA-UHD: An LMM Perceiving Any Aspect Ratio and High-Resolution Images ☆318 · Updated last month
- Mixture-of-Experts for Large Vision-Language Models ☆1,975 · Updated 5 months ago
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design ☆1,751 · Updated last week
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ☆523 · Updated 2 weeks ago