QwenLM / Qwen3-Omni
Qwen3-Omni is a natively end-to-end, omni-modal LLM developed by the Qwen team at Alibaba Cloud, capable of understanding text, audio, images, and video, as well as generating speech in real time.
☆3,429 · Updated Jan 8, 2026
Alternatives and similar repositories for Qwen3-Omni
Users interested in Qwen3-Omni are comparing it to the repositories listed below.
- Qwen2.5-Omni is an end-to-end multimodal model by the Qwen team at Alibaba Cloud, capable of understanding text, audio, vision, video, and pe… ☆3,919 · Updated Jun 12, 2025
- Qwen3-VL is the multimodal large language model series developed by the Qwen team, Alibaba Cloud. ☆18,273 · Updated Jan 30, 2026
- Step-Audio 2 is an end-to-end multi-modal large language model designed for industry-strength audio understanding and speech conversation… ☆1,336 · Updated Sep 22, 2025
- MiMo-Audio: Audio Language Models are Few-Shot Learners ☆968 · Updated Sep 20, 2025
- The official repo of Qwen2-Audio, a chat and pretrained large audio language model proposed by Alibaba Cloud. ☆2,050 · Updated Apr 21, 2025
- Kimi-Audio, an open-source audio foundation model excelling in audio understanding, generation, and conversation ☆4,489 · Updated Jun 21, 2025
- Moshi is a speech-text foundation model and full-duplex spoken dialogue framework. It uses Mimi, a state-of-the-art streaming neural audi… ☆9,642 · Updated this week
- PyTorch implementation of Audio Flamingo: a series of advanced audio understanding language models ☆994 · Updated Dec 15, 2025
- Qwen3 is the large language model series developed by the Qwen team, Alibaba Cloud. ☆26,595 · Updated Jan 9, 2026
- Open-source unified multimodal model ☆5,674 · Updated Oct 27, 2025
- A framework for speech, language, audio, and music processing with large language models ☆972 · Updated Jan 15, 2026
- LLaMA-Omni is a low-latency, high-quality end-to-end speech interaction model built upon Llama-3.1-8B-Instruct, aiming to achieve spee… ☆3,123 · Updated May 19, 2025
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ☆1,161 · Updated Jul 15, 2025
- Qwen-Image is a powerful image generation foundation model capable of complex text rendering and precise image editing. ☆7,359 · Updated Feb 10, 2026
- GLM-4.6V/4.5V/4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning ☆2,182 · Updated Jan 27, 2026
- A Gemini 2.5 Flash-level MLLM for vision, speech, and full-duplex multimodal live streaming on your phone ☆23,756 · Updated this week
- ✨✨[NeurIPS 2025] VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction ☆2,487 · Updated Mar 28, 2025
- Next-Token Prediction is All You Need ☆2,345 · Updated Jan 12, 2026
- GLM-4-Voice | An end-to-end Chinese-English speech dialogue model ☆3,140 · Updated Dec 5, 2024
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and vision-language model based on linear attention ☆3,330 · Updated Jul 7, 2025
- 🤗 R1-AQA Model: mispeech/r1-aqa ☆314 · Updated Mar 28, 2025
- The official repo of Qwen-Audio (通义千问-Audio), a chat and pretrained large audio language model proposed by Alibaba Cloud. ☆1,873 · Updated Jul 5, 2024
- A native-PyTorch library for large-scale M-LLM (text/audio) training with TP/CP/DP. ☆224 · Updated Aug 6, 2025
- Baichuan-Audio: A Unified Framework for End-to-End Speech Interaction ☆217 · Updated Feb 28, 2025
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆70,205 · Updated this week
- ✨✨Freeze-Omni: A Smart and Low Latency Speech-to-speech Dialogue Model with Frozen LLM ☆365 · Updated May 27, 2025
- An open-source multimodal large language model that can hear and talk while thinking, featuring real-time end-to-end speech input and streaming… ☆3,524 · Updated Nov 5, 2024
- Agent framework and applications built upon Qwen >= 3.0, featuring function calling, MCP, Code Interpreter, RAG, a Chrome extension, etc. ☆13,268 · Updated this week
- Fast and memory-efficient exact attention ☆22,231 · Updated this week
- Text- and image-to-video generation: CogVideoX (2024) and CogVideo (ICLR 2023) ☆12,426 · Updated Nov 4, 2025
- Wan: Open and Advanced Large-Scale Video Generative Models ☆14,236 · Updated Dec 17, 2025
- LLaSA: Scaling Train-time and Inference-time Compute for LLaMA-based Speech Synthesis ☆654 · Updated Jan 21, 2026
- A state-of-the-art audio codec with a 90x compression factor. Supports 44.1 kHz, 24 kHz, and 16 kHz mono/stereo audio. ☆1,714 · Updated Jan 26, 2026
- FunCodec is a research-oriented toolkit for audio quantization and downstream applications such as text-to-speech synthesis, music gener… ☆441 · Updated Jan 25, 2024
- Wan: Open and Advanced Large-Scale Video Generative Models ☆15,327 · Updated Dec 15, 2025
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA), built towards GPT-4V-level capabilities and beyond ☆24,446 · Updated Aug 12, 2024
- SALMONN family: a suite of advanced multi-modal LLMs ☆1,391 · Updated Feb 3, 2026
- Fully open reproduction of DeepSeek-R1 ☆25,879 · Updated Nov 24, 2025