InternLM / InternLM-XComposer
InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions
☆2,909 Updated 7 months ago
Alternatives and similar repositories for InternLM-XComposer
Users interested in InternLM-XComposer are comparing it to the libraries listed below.
- 【TMM 2025🔥】 Mixture-of-Experts for Large Vision-Language Models ☆2,285 Updated 5 months ago
- GPT4V-level open-source multi-modal model based on Llama3-8B ☆2,425 Updated 9 months ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,761 Updated last year
- Next-Token Prediction is All You Need ☆2,271 Updated last month
- ☆4,463 Updated 3 months ago
- A family of lightweight multimodal models. ☆1,049 Updated last year
- A Next-Generation Training Engine Built for Ultra-Large MoE Models ☆5,027 Updated last week
- The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by Alibaba Cloud. ☆6,446 Updated last year
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks ☆3,578 Updated this week
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family ☆2,539 Updated 8 months ago
- 【EMNLP 2024🔥】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,424 Updated last year
- 【ICLR 2024🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆855 Updated last year
- A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. ☆1,426 Updated 3 months ago
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,975 Updated last month
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆863 Updated 7 months ago
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ☆1,261 Updated 11 months ago
- A Framework of Small-scale Large Multimodal Models ☆940 Updated 8 months ago
- ✨✨[NeurIPS 2025] VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction