DAMO-NLP-SG / Video-LLaMA
[EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
☆2,987 · Updated 10 months ago
Alternatives and similar repositories for Video-LLaMA:
Users who are interested in Video-LLaMA are comparing it to the repositories listed below.
- [CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS. ☆3,213 · Updated 3 months ago
- An open-source framework for training large multimodal models. ☆3,888 · Updated 7 months ago
- Multimodal-GPT ☆1,497 · Updated last year
- An Open-source Toolkit for LLM Development ☆2,768 · Updated 3 months ago
- 【EMNLP 2024🔥】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,225 · Updated 4 months ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,856 · Updated last year
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family ☆2,450 · Updated 2 weeks ago
- A state-of-the-art-level open visual language model | multimodal pretrained model ☆6,476 · Updated 10 months ago
- Open-source and strong foundation image recognition models. ☆3,176 · Updated 2 months ago
- The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by Alibaba Cloud. ☆5,782 · Updated 8 months ago
- Caption-Anything is a versatile tool combining image segmentation, visual captioning, and ChatGPT, generating tailored captions with dive… ☆1,734 · Updated last year
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆10,465 · Updated 4 months ago
- Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model ☆3,482 · Updated 5 months ago
- [ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the cap… ☆1,337 · Updated 2 weeks ago
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ☆1,143 · Updated 2 months ago
- [ECCV2024] Video Foundation Models & Data for Multimodal Understanding ☆1,810 · Updated last week
- Mixture-of-Experts for Large Vision-Language Models ☆2,145 · Updated 4 months ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,706 · Updated 6 months ago
- ☆3,686 · Updated last month
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" ☆2,144 · Updated 2 months ago
- Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration ☆1,561 · Updated 3 months ago
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ☆2,808 · Updated 2 months ago
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆5,185 · Updated 8 months ago
- 【ICLR 2024🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆803 · Updated last year
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆799 · Updated 8 months ago
- 🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing imp… ☆3,247 · Updated last year
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,191 · Updated last year
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,656 · Updated 8 months ago
- GPT4V-level open-source multi-modal model based on Llama3-8B ☆2,332 · Updated last month
- Painter & SegGPT Series: Vision Foundation Models from BAAI ☆2,564 · Updated 4 months ago