gpt-omni / mini-omni
An open-source multimodal large language model that can hear and talk while thinking, featuring real-time end-to-end speech input and streaming audio output for conversation.
☆3,512 · Updated last year
Alternatives and similar repositories for mini-omni
Users interested in mini-omni are comparing it to the libraries listed below.
- Towards Open-source GPT-4o with Vision, Speech and Duplex Capabilities. ☆1,858 · Updated last year
- LLaMA-Omni is a low-latency and high-quality end-to-end speech interaction model built upon Llama-3.1-8B-Instruct, aiming to achieve spee… ☆3,114 · Updated 8 months ago
- The official repo of Qwen2-Audio chat & pretrained large audio language model proposed by Alibaba Cloud. ☆2,034 · Updated 9 months ago
- GLM-4-Voice | end-to-end Chinese-English spoken dialogue model ☆3,129 · Updated last year
- ✨✨[NeurIPS 2025] VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction ☆2,478 · Updated 10 months ago
- The official repo of Qwen-Audio (通义千问-Audio) chat & pretrained large audio language model proposed by Alibaba Cloud. ☆1,865 · Updated last year
- Inference code for the paper "Spirit-LM Interleaved Spoken and Written Language Model". ☆925 · Updated last year
- Interface for OuteTTS models. ☆1,419 · Updated 7 months ago
- [ICLR 2025] Hallo2: Long-Duration and High-Resolution Audio-driven Portrait Image Animation ☆3,669 · Updated 11 months ago
- First base model for full-duplex conversational audio ☆1,770 · Updated last year
- [ICLR 2025] SOTA discrete acoustic codec models with 40/75 tokens per second for audio language modeling ☆1,256 · Updated 10 months ago
- Speech To Speech: an effort for an open-sourced and modular GPT4-o ☆4,274 · Updated 9 months ago
- Moshi is a speech-text foundation model and full-duplex spoken dialogue framework. It uses Mimi, a state-of-the-art streaming neural audi… ☆9,463 · Updated last week
- SpeechGPT Series: Speech Large Language Models ☆1,400 · Updated last year
- MiniCPM4 & MiniCPM4.1: Ultra-Efficient LLMs on End Devices, achieving 3+ generation speedup on reasoning tasks ☆8,509 · Updated 3 months ago
- Qwen2.5-Omni is an end-to-end multimodal model by the Qwen team at Alibaba Cloud, capable of understanding text, audio, vision, video, and pe… ☆3,893 · Updated 7 months ago
- ✨✨[NeurIPS 2025] VITA-Audio: Fast Interleaved Cross-Modal Token Generation for Efficient Large Speech-Language Model ☆670 · Updated 8 months ago
- [EMNLP-2024] Build multimodal language agents for fast prototyping and production ☆2,624 · Updated 10 months ago
- ☆994 · Updated 10 months ago
- Step-Audio 2 is an end-to-end multi-modal large language model designed for industry-strength audio understanding and speech conversation… ☆1,313 · Updated 4 months ago
- ☆4,604 · Updated last month
- ☆1,517 · Updated last year
- SALMONN family: A suite of advanced multi-modal LLMs ☆1,383 · Updated 3 months ago
- Inference and training library for high-quality TTS models. ☆5,513 · Updated last year
- Local SRT/LLM/TTS Voicechat ☆752 · Updated last year
- Memory-Guided Diffusion for Expressive Talking Video Generation ☆1,076 · Updated 5 months ago
- [CVPR 2025] Hallo3: Highly Dynamic and Realistic Portrait Image Animation with Video Diffusion Transformer ☆1,359 · Updated 10 months ago
- GPT4V-level open-source multi-modal model based on Llama3-8B ☆2,427 · Updated 10 months ago
- Align Anything: Training All-modality Model with Feedback ☆4,624 · Updated 2 months ago
- Code for "AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling" ☆865 · Updated last year