ictnlp / LLaMA-Omni
LLaMA-Omni is a low-latency and high-quality end-to-end speech interaction model built upon Llama-3.1-8B-Instruct, aiming to achieve speech capabilities at the GPT-4o level.
☆3,068 · Updated 3 months ago
Alternatives and similar repositories for LLaMA-Omni
Users interested in LLaMA-Omni are comparing it to the libraries listed below.
- open-source multimodal large language model that can hear, talk while thinking. Featuring real-time end-to-end speech input and streaming… ☆3,397 · Updated 10 months ago
- first base model for full-duplex conversational audio ☆1,755 · Updated 8 months ago
- Local realtime voice AI ☆2,361 · Updated 6 months ago
- Moshi is a speech-text foundation model and full-duplex spoken dialogue framework. It uses Mimi, a state-of-the-art streaming neural audi… ☆8,866 · Updated last week
- Inference code for the paper "Spirit-LM: Interleaved Spoken and Written Language Model". ☆918 · Updated 10 months ago
- The official repo of Qwen2-Audio chat & pretrained large audio language model proposed by Alibaba Cloud. ☆1,857 · Updated 4 months ago
- A fast multimodal LLM for real-time voice ☆4,181 · Updated last week
- Interface for OuteTTS models. ☆1,375 · Updated 2 months ago
- Fast and accurate automatic speech recognition (ASR) for edge devices ☆2,855 · Updated last week
- Speech To Speech: an effort for an open-sourced and modular GPT-4o ☆4,169 · Updated 4 months ago
- Hibiki is a model for streaming speech translation (also known as simultaneous translation). Unlike offline translation, where one waits f… ☆1,262 · Updated 4 months ago
- The official repo of Qwen-Audio (通义千问-Audio) chat & pretrained large audio language model proposed by Alibaba Cloud. ☆1,784 · Updated last year
- Towards Open-source GPT-4o with Vision, Speech and Duplex Capabilities. ☆1,797 · Updated 7 months ago
- WhisperFusion builds upon the capabilities of WhisperLive and WhisperSpeech to provide seamless conversations with an AI. ☆1,630 · Updated last year
- GLM-4-Voice | An end-to-end Chinese-English spoken dialogue model ☆3,031 · Updated 9 months ago
- Everything about the SmolLM and SmolVLM family of models ☆3,211 · Updated last month
- Whisper with Medusa heads ☆853 · Updated last month
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases. In ICML 2024. ☆1,314 · Updated 4 months ago
- ☆909 · Updated this week
- Local SRT/LLM/TTS Voicechat ☆716 · Updated 11 months ago
- ✨✨VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction ☆2,401 · Updated 5 months ago
- Inference and training library for high-quality TTS models. ☆5,411 · Updated 9 months ago
- Controllable and fast Text-to-Speech for over 7000 languages! ☆1,637 · Updated 2 months ago
- Omni SenseVoice: High-Speed Speech Recognition with word timestamps 🗣️🎯 ☆865 · Updated 6 months ago
- Have a natural, spoken conversation with AI! ☆3,142 · Updated 2 months ago
- Codebase for Aria - an Open Multimodal Native MoE ☆1,067 · Updated 7 months ago
- Distilled variant of Whisper for speech recognition. 6x faster, 50% smaller, within 1% word error rate. ☆3,944 · Updated 8 months ago
- [ICCV 2025] LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning ☆2,059 · Updated last week
- Qwen2.5-Omni is an end-to-end multimodal model by the Qwen team at Alibaba Cloud, capable of understanding text, audio, vision, video, and pe… ☆3,598 · Updated 3 months ago
- SALMONN family: A suite of advanced multi-modal LLMs ☆1,307 · Updated 2 weeks ago