lucasnewman / f5-tts-mlx
Implementation of F5-TTS in MLX
☆592 · Updated 7 months ago
Alternatives and similar repositories for f5-tts-mlx
Users interested in f5-tts-mlx are comparing it to the libraries listed below.
- Interface for OuteTTS models. ☆1,390 · Updated 4 months ago
- An implementation of CSM (Conversational Speech Model) for Apple Silicon using MLX. ☆379 · Updated 2 months ago
- An extremely fast implementation of Whisper optimized for Apple Silicon using MLX. ☆796 · Updated last year
- Run Orpheus 3B locally with LM Studio. ☆478 · Updated 7 months ago
- A real-time speech-to-speech chatbot powered by Whisper Small, Llama 3.2, and Kokoro-82M. ☆245 · Updated 9 months ago
- A fast TTS engine. ☆555 · Updated 9 months ago
- Blazing fast Whisper Turbo for ASR (speech-to-text) tasks. ☆217 · Updated last year
- An implementation of Nvidia's Parakeet models for Apple Silicon using MLX. ☆532 · Updated 3 weeks ago
- Local SRT/LLM/TTS voice chat. ☆732 · Updated last year
- Whisper with Medusa heads. ☆862 · Updated 2 months ago
- First base model for full-duplex conversational audio. ☆1,767 · Updated 9 months ago
- Fast streaming TTS with Orpheus + WebRTC (with FastRTC). ☆339 · Updated 6 months ago
- Phi-3.5 for Mac: locally-run vision and language models for Apple Silicon. ☆273 · Updated last year
- On-device image generation for Apple Silicon. ☆662 · Updated 6 months ago
- OpenAI-compatible TTS for Sesame CSM:1b & dia:1.6b, with voice cloning from file/YT. ☆415 · Updated last month
- Sesame CSM 1B voice cloning. ☆323 · Updated 7 months ago
- Python tools for WhisperKit: model conversion, optimization, and evaluation. ☆229 · Updated 2 months ago
- Mac-compatible Ollama Voice. ☆503 · Updated 2 months ago
- Inference code for the paper "SpiRit-LM: Interleaved Spoken and Written Language Model". ☆925 · Updated 11 months ago
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆583 · Updated last week
- High-performance text-to-speech server with OpenAI-compatible API, 8 voices, emotion tags, and a modern web UI. Optimized for RTX GPUs. ☆578 · Updated 3 months ago
- ☆272 · Updated last month
- Open-source inference code for Rev's model. ☆432 · Updated 6 months ago
- Generate accurate transcripts using Apple's MLX framework. ☆441 · Updated 6 months ago
- 📋 NotebookMLX: an open-source version of NotebookLM (a port of NotebookLlama). ☆320 · Updated 7 months ago
- ☆982 · Updated last month
- ☆634 · Updated 2 months ago
- FastMLX is a high-performance, production-ready API to host MLX models. ☆332 · Updated 7 months ago
- Examples for Cerebrium Serverless GPUs. ☆512 · Updated last week
- Self-host the powerful Dia TTS model. This server offers a user-friendly web UI, flexible API endpoints (incl. OpenAI-compatible), suppor… ☆325 · Updated 4 months ago