Qwen2.5-Omni is an end-to-end multimodal model developed by the Qwen team at Alibaba Cloud, capable of understanding text, audio, images, and video, and of generating speech in real time.
☆3,936 · Jun 12, 2025 · Updated 8 months ago
Alternatives and similar repositories for Qwen2.5-Omni
Users interested in Qwen2.5-Omni are comparing it to the libraries listed below.
- Qwen3-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud. ☆18,505 · Jan 30, 2026 · Updated last month
- Qwen3 is the large language model series developed by the Qwen team at Alibaba Cloud. ☆26,713 · Jan 9, 2026 · Updated last month
- ✨✨ [NeurIPS 2025] VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction ☆2,494 · Mar 28, 2025 · Updated 11 months ago
- Kimi-Audio, an open-source audio foundation model excelling in audio understanding, generation, and conversation. ☆4,502 · Jun 21, 2025 · Updated 8 months ago
- The official repo of Qwen2-Audio, the chat & pretrained large audio language model proposed by Alibaba Cloud. ☆2,059 · Apr 21, 2025 · Updated 10 months ago
- GLM-4-Voice | An end-to-end Chinese-English spoken dialogue model ☆3,144 · Dec 5, 2024 · Updated last year
- A Gemini 2.5 Flash Level MLLM for Vision, Speech, and Full-Duplex Multimodal Live Streaming on Your Phone ☆24,027 · Feb 23, 2026 · Updated last week
- Qwen3-Omni is a natively end-to-end, omni-modal LLM developed by the Qwen team at Alibaba Cloud, capable of understanding text, audio, im… ☆3,460 · Jan 8, 2026 · Updated last month
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o performance. ☆9,854 · Sep 22, 2025 · Updated 5 months ago
- Wan: Open and Advanced Large-Scale Video Generative Models ☆15,434 · Dec 15, 2025 · Updated 2 months ago
- An open-source multimodal large language model that can hear and talk while thinking, featuring real-time end-to-end speech input and streaming… ☆3,530 · Nov 5, 2024 · Updated last year
- Open-source unified multimodal model ☆5,704 · Oct 27, 2025 · Updated 4 months ago
- Moshi is a speech-text foundation model and full-duplex spoken dialogue framework. It uses Mimi, a state-of-the-art streaming neural audi… ☆9,750 · Feb 12, 2026 · Updated 3 weeks ago
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ☆1,164 · Jul 15, 2025 · Updated 7 months ago
- Multilingual large voice generation model, providing full-stack inference, training, and deployment capability. ☆19,786 · Feb 11, 2026 · Updated 3 weeks ago
- Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM4.5, InternLM3, Llama4, ...) and 300+ MLLMs (Qwen3-VL,… ☆12,820 · Updated this week
- Solve Visual Understanding with Reinforced VLMs ☆5,850 · Oct 21, 2025 · Updated 4 months ago
- Agent framework and applications built upon Qwen>=3.0, featuring Function Calling, MCP, Code Interpreter, RAG, Chrome extension, etc. ☆13,451 · Feb 16, 2026 · Updated 2 weeks ago
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model & vision-language model based on Linear Attention ☆3,356 · Jul 7, 2025 · Updated 7 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆71,883 · Updated this week
- Multilingual Voice Understanding Model ☆7,611 · Dec 30, 2025 · Updated 2 months ago
- Next-Token Prediction is All You Need ☆2,355 · Jan 12, 2026 · Updated last month
- The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision language model proposed by Alibaba Cloud. ☆6,545 · Aug 7, 2024 · Updated last year
- The official repo of Qwen-Audio (通义千问-Audio), the chat & pretrained large audio language model proposed by Alibaba Cloud. ☆1,875 · Jul 5, 2024 · Updated last year
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024) ☆67,966 · Updated this week
- MAGI-1: Autoregressive Video Generation at Scale ☆3,647 · Jun 17, 2025 · Updated 8 months ago
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,500 · Aug 12, 2024 · Updated last year
- A Fundamental End-to-End Speech Recognition Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Recognition, Voice Activity… ☆15,036 · Updated this week
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… ☆1,551 · Jun 14, 2025 · Updated 8 months ago
- 🤗 R1-AQA Model: mispeech/r1-aqa ☆314 · Mar 28, 2025 · Updated 11 months ago
- LLaMA-Omni is a low-latency and high-quality end-to-end speech interaction model built upon Llama-3.1-8B-Instruct, aiming to achieve spee… ☆3,128 · May 19, 2025 · Updated 9 months ago
- Baichuan-Audio: A Unified Framework for End-to-End Speech Interaction ☆218 · Feb 28, 2025 · Updated last year
- A Framework for Speech, Language, Audio, Music Processing with Large Language Model ☆995 · Jan 15, 2026 · Updated last month
- Witness the aha moment of VLM with less than $3. ☆4,036 · May 19, 2025 · Updated 9 months ago
- ✨✨ Freeze-Omni: A Smart and Low Latency Speech-to-speech Dialogue Model with Frozen LLM ☆368 · May 27, 2025 · Updated 9 months ago
- Fully open reproduction of DeepSeek-R1 ☆25,910 · Nov 24, 2025 · Updated 3 months ago
- Fast and memory-efficient exact attention ☆22,460 · Updated this week