gpt-omni / mini-omni
An open-source multimodal large language model that can hear and talk while thinking, featuring real-time end-to-end speech input and streaming audio output for conversation.
☆3,385 · Updated 9 months ago
Alternatives and similar repositories for mini-omni
Users interested in mini-omni are comparing it to the libraries listed below
- Towards Open-source GPT-4o with Vision, Speech and Duplex Capabilities ☆1,787 · Updated 7 months ago
- LLaMA-Omni is a low-latency and high-quality end-to-end speech interaction model built upon Llama-3.1-8B-Instruct, aiming to achieve spee… ☆2,980 · Updated 3 months ago
- The official repo of Qwen2-Audio chat & pretrained large audio language model proposed by Alibaba Cloud. ☆1,843 · Updated 4 months ago
- GLM-4-Voice | end-to-end Chinese-English spoken dialogue model ☆3,015 · Updated 8 months ago
- The official repo of Qwen-Audio (通义千问-Audio) chat & pretrained large audio language model proposed by Alibaba Cloud. ☆1,762 · Updated last year
- Interface for OuteTTS models. ☆1,357 · Updated 2 months ago
- Inference code for the paper "Spirit-LM Interleaved Spoken and Written Language Model". ☆917 · Updated 9 months ago
- first base model for full-duplex conversational audio ☆1,750 · Updated 7 months ago
- ✨✨VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction ☆2,387 · Updated 4 months ago
- [ICLR 2025] Hallo2: Long-Duration and High-Resolution Audio-driven Portrait Image Animation ☆3,604 · Updated 5 months ago
- ✨✨VITA-Audio: Fast Interleaved Cross-Modal Token Generation for Efficient Large Speech-Language Model ☆631 · Updated 3 months ago
- SpeechGPT Series: Speech Large Language Models ☆1,389 · Updated last year
- [ICASSP 2024] 🍵 Matcha-TTS: A fast TTS architecture with conditional flow matching ☆1,096 · Updated last week
- Qwen2.5-Omni is an end-to-end multimodal model by Qwen team at Alibaba Cloud, capable of understanding text, audio, vision, video, and pe… ☆3,527 · Updated 2 months ago
- On device AI inference in minutes—now for MLX & GGUF and Qualcomm NPU, with Android and iOS coming soon. ☆4,682 · Updated this week
- Speech To Speech: an effort for an open-sourced and modular GPT4-o ☆4,147 · Updated 4 months ago
- ☆1,399 · Updated last year
- StreamSpeech is an “All in One” seamless model for offline and simultaneous speech recognition, speech translation and speech synthesis. ☆1,136 · Updated last month
- Open-source industrial-grade ASR models supporting Mandarin, Chinese dialects and English, achieving a new SOTA on public Mandarin ASR be… ☆1,267 · Updated 4 months ago
- [ICLR 2025] SOTA discrete acoustic codec models with 40/75 tokens per second for audio language modeling ☆1,181 · Updated 5 months ago
- An AI-Powered Speech Processing Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Enhancement, Separation, and Target Spe… ☆3,241 · Updated last week
- Controllable and fast Text-to-Speech for over 7000 languages! ☆1,633 · Updated last month
- Moshi is a speech-text foundation model and full-duplex spoken dialogue framework. It uses Mimi, a state-of-the-art streaming neural audi… ☆8,784 · Updated last week
- Multilingual Voice Understanding Model ☆6,424 · Updated last week
- Dolphin is a multilingual, multitask ASR model jointly trained by DataoceanAI and Tsinghua University. ☆595 · Updated last month
- SALMONN family: A suite of advanced multi-modal LLMs ☆1,300 · Updated last month
- Fast and accurate automatic speech recognition (ASR) for edge devices ☆2,833 · Updated 3 months ago
- Voice Activity Detector (VAD) from TEN: low-latency, high-performance and lightweight ☆1,305 · Updated last week
- ☆366 · Updated last year
- Build multimodal language agents for fast prototype and production ☆2,542 · Updated 5 months ago