dvlab-research / LLaMA-VID
LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024)
☆772 · Updated 7 months ago
Alternatives and similar repositories for LLaMA-VID:
Users interested in LLaMA-VID are comparing it to the repositories listed below.
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆590 · Updated last month
- Official code for the Goldfish model for long video understanding and MiniGPT4-video for short video understanding ☆591 · Updated 2 months ago
- [ICLR 2024 🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆789 · Updated 11 months ago
- Official repository for the paper PLLaVA ☆638 · Updated 7 months ago
- [NeurIPS 2024] A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, and Editing ☆491 · Updated 4 months ago
- LaVIT: Empowering the Large Language Model to Understand and Generate Visual Content ☆566 · Updated 4 months ago
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ☆1,090 · Updated last month
- ✨✨ Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆470 · Updated 2 months ago
- Long Context Transfer from Language to Vision ☆364 · Updated 3 months ago
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆860 · Updated 2 months ago
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆339 · Updated 3 months ago
- Tarsier: a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ☆302 · Updated 2 weeks ago
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model ☆319 · Updated 3 months ago
- ☆766 · Updated 7 months ago
- Official implementation of SEED-LLaMA (ICLR 2024) ☆596 · Updated 5 months ago
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆326 · Updated this week
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆419 · Updated 3 months ago
- Multimodal Models in the Real World ☆440 · Updated last week
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ☆577 · Updated 4 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Feature Pyramid via Hierarchical Window Transformer ☆367 · Updated last month
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆833 · Updated 3 months ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,691 · Updated 5 months ago
- ☆358 · Updated this week
- [ICLR 2025] Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation ☆1,230 · Updated this week
- [CVPR 2024] OneLLM: One Framework to Align All Modalities with Language ☆621 · Updated 4 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆726 · Updated last year
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆1,708 · Updated this week
- Official repository of the paper "VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding" ☆259 · Updated 6 months ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆252 · Updated last year
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆761 · Updated 6 months ago