VITA-MLLM / VITA
✨✨VITA: Towards Open-Source Interactive Omni Multimodal LLM
Related projects:
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs
- The official repo of Qwen2-Audio, the chat and pretrained large audio language model proposed by Alibaba Cloud.
- Official code for the Goldfish model for long video understanding and MiniGPT4-video for short video understanding
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024)
- ✨✨Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis
- Open-source evaluation toolkit for large vision-language models (LVLMs), supporting ~100 VLMs and 40+ benchmarks
- A family of lightweight multimodal models.
- [ICLR 2024 🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment
- Code for "AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling"
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens"
- ✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models. The first work to correct hallucinations in MLLMs.
- Qwen2-VL is the multimodal large language model series developed by the Qwen team, Alibaba Cloud.
- Official code implementation of Vary-toy (Small Language Model Meets with Reinforced Vision Vocabulary)
- A Framework of Small-scale Large Multimodal Models
- Anole: An Open, Autoregressive, and Native Multimodal Model for Interleaved Image-Text Generation
- 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3)
- InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output
- Strong and Open Vision Language Assistant for Mobile Devices
- [CVPR 2024] OneLLM: One Framework to Align All Modalities with Language
- The official repo of Qwen-Audio (通义千问-Audio), the chat and pretrained large audio language model proposed by Alibaba Cloud.
- Official repository for the paper PLLaVA
- GPT4V-level open-source multi-modal model based on Llama3-8B
- High-speed downloads from mirror sites using Hugging Face's official download tool (see the sketch after this list).
- Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation.
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model
- Latte: Latent Diffusion Transformer for Video Generation.
- The first open-source, commercially usable dialogue model supporting bilingual (Chinese and English) speech-and-text multimodal conversation. Convenient voice input greatly improves the experience of using text-input LLMs, while avoiding the cumbersome pipeline of ASR-based solutions and the errors they can introduce.
- A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing
- VideoSys: An easy and efficient system for video generation
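
For the mirror-download entry above, here is a minimal sketch of what high-speed mirror downloading with the official `huggingface_hub` client looks like. The mirror endpoint `https://hf-mirror.com` and the repo id `VITA-MLLM/VITA` are illustrative assumptions, not details taken from that project's own documentation:

```python
# Minimal sketch: download a model snapshot through a Hugging Face mirror.
# Assumptions (for illustration only): the mirror endpoint https://hf-mirror.com
# and the repo id "VITA-MLLM/VITA".
import os

# HF_ENDPOINT must be set before huggingface_hub is imported,
# since the client reads the endpoint at import time.
os.environ.setdefault("HF_ENDPOINT", "https://hf-mirror.com")

from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="VITA-MLLM/VITA",  # hypothetical repo id for illustration
    local_dir="./VITA",        # where to place the downloaded snapshot
    max_workers=8,             # parallel file downloads for speed
)
print(f"Snapshot downloaded to {local_dir}")
```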