VITA-MLLM / VITA
✨✨VITA: Towards Open-Source Interactive Omni Multimodal LLM
☆964 · Updated 3 weeks ago
Related projects
Alternatives and complementary repositories for VITA
- ✨✨Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆406 · Updated 5 months ago
- Open-source evaluation toolkit for large vision-language models (LVLMs), supporting 160+ VLMs and 50+ benchmarks ☆1,361 · Updated this week
- The official repo of the Qwen2-Audio chat & pretrained large audio language models proposed by Alibaba Cloud. ☆1,230 · Updated 3 months ago
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ☆877 · Updated last week
- Next-Token Prediction is All You Need ☆1,824 · Updated 3 weeks ago
- A novel Multimodal Large Language Model (MLLM) architecture designed to structurally align visual and textual embeddings. ☆526 · Updated 2 weeks ago
- Official code for the Goldfish model (long video understanding) and MiniGPT4-video (short video understanding) ☆559 · Updated last month
- Code for "AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling" ☆780 · Updated 2 months ago
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆234 · Updated 2 weeks ago
- Janus-Series: Unified Multimodal Understanding and Generation Models ☆1,084 · Updated last week
- A Framework of Small-scale Large Multimodal Models ☆652 · Updated last month
- ✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models. The first work to correct hallucinations in MLLMs. ☆612 · Updated 5 months ago
- GPT4V-level open-source multi-modal model based on Llama3-8B ☆2,122 · Updated 2 months ago
- The official repo of the Qwen-Audio (通义千问-Audio) chat & pretrained large audio language models proposed by Alibaba Cloud. ☆1,490 · Updated 4 months ago
- Qwen2-VL is the multimodal large language model series developed by the Qwen team, Alibaba Cloud. ☆3,153 · Updated last month
- [ICLR 2024 🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆726 · Updated 7 months ago
- A family of lightweight multimodal models. ☆933 · Updated this week
- Towards Open-source GPT-4o with Vision, Speech and Duplex Capabilities. ☆1,565 · Updated 2 weeks ago
- Efficient Multimodal Large Language Models: A Survey ☆278 · Updated 3 months ago
- Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation. ☆1,029 · Updated last week
- InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output ☆2,525 · Updated last month
- Official repository for the paper PLLaVA ☆593 · Updated 3 months ago
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆736 · Updated 3 months ago
- Anole: An Open, Autoregressive and Native Multimodal Model for Interleaved Image-Text Generation ☆675 · Updated 3 months ago
- 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3) ☆813 · Updated 4 months ago
- The first open-source, commercially usable dialogue model supporting bilingual (Chinese-English) speech-and-text multimodal conversation. Convenient voice input greatly improves the user experience of text-input LLMs, while avoiding the cumbersome pipeline of ASR-based solutions and the errors they can introduce. ☆537 · Updated last year
- [NeurIPS 2024] A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ☆385 · Updated last month
- [CVPR 2024] OneLLM: One Framework to Align All Modalities with Language ☆590 · Updated 3 weeks ago