kyutai-labs / moshi
Moshi is a speech-text foundation model and full-duplex spoken dialogue framework. It uses Mimi, a state-of-the-art streaming neural audio codec.
☆ 8,999 · Updated this week
Alternatives and similar repositories for moshi
Users interested in moshi are comparing it to the libraries listed below.
- LLaMA-Omni is a low-latency and high-quality end-to-end speech interaction model built upon Llama-3.1-8B-Instruct, aiming to achieve spee… ☆ 3,079 · Updated 5 months ago
- Fast and accurate automatic speech recognition (ASR) for edge devices ☆ 2,918 · Updated this week
- A fast multimodal LLM for real-time voice ☆ 4,226 · Updated last month
- Speech To Speech: an effort for an open-sourced and modular GPT4-o ☆ 4,209 · Updated 6 months ago
- Local realtime voice AI ☆ 2,373 · Updated 7 months ago
- First base model for full-duplex conversational audio ☆ 1,766 · Updated 9 months ago
- Open-source framework for voice and multimodal conversational AI ☆ 8,410 · Updated this week
- Open-source multimodal large language model that can hear and talk while thinking. Featuring real-time end-to-end speech input and streaming… ☆ 3,419 · Updated 11 months ago
- Towards Human-Sounding Speech ☆ 5,617 · Updated 5 months ago
- Inference and training library for high-quality TTS models. ☆ 5,442 · Updated 10 months ago
- GLM-4-Voice | End-to-end Chinese-English spoken dialogue model ☆ 3,060 · Updated 10 months ago
- An AI-powered speech processing toolkit and open-source SOTA pretrained models, supporting speech enhancement, separation, and target spe… ☆ 3,509 · Updated 2 months ago
- Hibiki is a model for streaming speech translation (also known as simultaneous translation). Unlike offline translation, where one waits f… ☆ 1,297 · Updated 6 months ago
- Distilled variant of Whisper for speech recognition: 6x faster, 50% smaller, within 1% word error rate. ☆ 3,958 · Updated 9 months ago
- StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models ☆ 5,995 · Updated last year
- g1: Using Llama-3.1 70b on Groq to create o1-like reasoning chains ☆ 4,223 · Updated last month
- Controllable and fast text-to-speech for over 7,000 languages ☆ 1,645 · Updated 3 months ago
- Kimi-Audio, an open-source audio foundation model excelling in audio understanding, generation, and conversation ☆ 4,295 · Updated 4 months ago
- Everything about the SmolLM and SmolVLM family of models ☆ 3,314 · Updated last month
- Silero VAD: pre-trained enterprise-grade voice activity detector ☆ 7,066 · Updated last week
- ☆ 4,535 · Updated 4 months ago
- The Python library for real-time communication ☆ 4,343 · Updated last month
- Run PyTorch LLMs locally on servers, desktop, and mobile ☆ 3,615 · Updated last month
- The official repo of Qwen2-Audio chat and the pretrained large audio language model proposed by Alibaba Cloud ☆ 1,903 · Updated 6 months ago
- Whisper realtime streaming for long speech-to-text transcription and translation ☆ 3,388 · Updated last month
- Amphion (/æmˈfaɪən/) is a toolkit for audio, music, and speech generation. Its purpose is to support reproducible research and help junio… ☆ 9,439 · Updated 4 months ago
- Inference code for the paper "SpiRit-LM: Interleaved Spoken and Written Language Model" ☆ 924 · Updated 11 months ago
- Whisper with Medusa heads ☆ 861 · Updated 2 months ago
- Foundational model for human-like, expressive TTS ☆ 4,187 · Updated last year
- Interface for OuteTTS models ☆ 1,388 · Updated 4 months ago