QwenLM / Qwen3-Omni
Qwen3-Omni is a natively end-to-end, omni-modal LLM developed by the Qwen team at Alibaba Cloud, capable of understanding text, audio, images, and video, and of generating speech in real time.
☆2,699 · Updated last week
Alternatives and similar repositories for Qwen3-Omni
Users who are interested in Qwen3-Omni are comparing it to the libraries listed below.
- MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model. ☆2,920 · Updated 3 months ago
- MiMo: Unlocking the Reasoning Potential of Language Model – From Pretraining to Posttraining ☆1,596 · Updated 4 months ago
- GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning