QwenLM / Qwen2.5-Omni
Qwen2.5-Omni is an end-to-end multimodal model by the Qwen team at Alibaba Cloud, capable of understanding text, audio, image, and video inputs, and of generating speech in real time.
☆3,919 · updated Jun 12, 2025
Alternatives and similar repositories for Qwen2.5-Omni
Users interested in Qwen2.5-Omni are comparing it to the repositories listed below.
- Qwen3-VL is the multimodal large language model series developed by Qwen team, Alibaba Cloud. (☆18,273 · updated Jan 30, 2026)
- Qwen3 is the large language model series developed by Qwen team, Alibaba Cloud. (☆26,595 · updated Jan 9, 2026)
- ✨✨[NeurIPS 2025] VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction (☆2,487 · updated Mar 28, 2025)
- Kimi-Audio, an open-source audio foundation model excelling in audio understanding, generation, and conversation (☆4,489 · updated Jun 21, 2025)
- The official repo of Qwen2-Audio chat & pretrained large audio language model proposed by Alibaba Cloud. (☆2,050 · updated Apr 21, 2025)
- GLM-4-Voice | an end-to-end Chinese-English speech dialogue model (☆3,140 · updated Dec 5, 2024)
- A Gemini 2.5 Flash Level MLLM for Vision, Speech, and Full-Duplex Multimodal Live Streaming on Your Phone (☆23,756 · updated this week)
- Qwen3-Omni is a natively end-to-end, omni-modal LLM developed by the Qwen team at Alibaba Cloud, capable of understanding text, audio, im… (☆3,429 · updated Jan 8, 2026)
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o (an open-source multimodal dialogue model approaching GPT-4o performance) (☆9,806 · updated Sep 22, 2025)
- Wan: Open and Advanced Large-Scale Video Generative Models (☆15,327 · updated Dec 15, 2025)
- An open-source multimodal large language model that can hear and talk while thinking, featuring real-time end-to-end speech input and streaming… (☆3,524 · updated Nov 5, 2024)
- Open-source unified multimodal model (☆5,674 · updated Oct 27, 2025)
- Moshi is a speech-text foundation model and full-duplex spoken dialogue framework. It uses Mimi, a state-of-the-art streaming neural audi… (☆9,642 · updated this week)
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities (☆1,161 · updated Jul 15, 2025)
- Multilingual large voice generation model, providing full-stack inference, training, and deployment capability. (☆19,578 · updated Feb 11, 2026)
- Use PEFT or full-parameter training to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3, Qwen3-MoE, DeepSeek-R1, GLM4.5, InternLM3, Llama4, ...) and 300+ MLLMs (… (☆12,670 · updated this week)
- Agent framework and applications built upon Qwen>=3.0, featuring Function Calling, MCP, Code Interpreter, RAG, Chrome extension, etc. (☆13,268 · updated this week)
- Solve Visual Understanding with Reinforced VLMs (☆5,841 · updated Oct 21, 2025)
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, large language model & vision-language model based on Linear Attention (☆3,330 · updated Jul 7, 2025)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆70,205 · updated this week)
- Multilingual Voice Understanding Model (☆7,497 · updated Dec 30, 2025)
- Next-Token Prediction is All You Need (☆2,345 · updated Jan 12, 2026)
- The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by Alibaba Cloud. (☆6,526 · updated Aug 7, 2024)
- The official repo of Qwen-Audio (通义千问-Audio) chat & pretrained large audio language model proposed by Alibaba Cloud. (☆1,873 · updated Jul 5, 2024)
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024) (☆67,253 · updated this week)
- MAGI-1: Autoregressive Video Generation at Scale (☆3,641 · updated Jun 17, 2025)
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. (☆24,446 · updated Aug 12, 2024)
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… (☆1,544 · updated Jun 14, 2025)
- 🤗 R1-AQA Model: mispeech/r1-aqa (☆314 · updated Mar 28, 2025)
- A Fundamental End-to-End Speech Recognition Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Recognition, Voice Activity… (☆14,891 · updated Feb 4, 2026)
- LLaMA-Omni is a low-latency and high-quality end-to-end speech interaction model built upon Llama-3.1-8B-Instruct, aiming to achieve spee… (☆3,123 · updated May 19, 2025)
- A Framework for Speech, Language, Audio, Music Processing with Large Language Model (☆972 · updated Jan 15, 2026)
- Witness the aha moment of VLM with less than $3. (☆4,032 · updated May 19, 2025)
- Baichuan-Audio: A Unified Framework for End-to-End Speech Interaction (☆217 · updated Feb 28, 2025)
- ✨✨Freeze-Omni: A Smart and Low Latency Speech-to-speech Dialogue Model with Frozen LLM (☆365 · updated May 27, 2025)
- Fully open reproduction of DeepSeek-R1 (☆25,879 · updated Nov 24, 2025)
- Fast and memory-efficient exact attention (☆22,231 · updated this week)