JosefAlbers / whisper-turbo-mlx
Blazing-fast Whisper Turbo for ASR (speech-to-text) tasks
☆212 · Updated 8 months ago
Alternatives and similar repositories for whisper-turbo-mlx
Users interested in whisper-turbo-mlx are comparing it to the libraries listed below.
- Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon ☆271 · Updated 10 months ago
- An implementation of Nvidia's Parakeet models for Apple Silicon using MLX. ☆357 · Updated last week
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆311 · Updated 3 months ago
- MLX-Embeddings is a package for running Vision and Language Embedding models locally on your Mac using MLX. ☆179 · Updated last month
- 📋 NotebookMLX - An open-source version of NotebookLM (ported from NotebookLlama) ☆303 · Updated 4 months ago
- An implementation of the CSM (Conversational Speech Model) for Apple Silicon using MLX. ☆364 · Updated 2 months ago
- Fast streaming TTS with Orpheus + WebRTC (with FastRTC) ☆298 · Updated 3 months ago
- MLX-GUI: an MLX inference server ☆69 · Updated this week
- A real-time speech-to-speech chatbot powered by Whisper Small, Llama 3.2, and Kokoro-82M. ☆232 · Updated 5 months ago
- Python tools for WhisperKit: model conversion, optimization, and evaluation ☆219 · Updated last week
- Distributed inference for MLX LLMs ☆93 · Updated 11 months ago
- GenAI & agent toolkit for Apple Silicon Macs, implementing JSON schema-steered structured output (3SO) and tool-calling in Python. For mor… ☆128 · Updated last month
- SiLLM simplifies training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆273 · Updated 3 weeks ago
- Implementation of F5-TTS in MLX ☆561 · Updated 3 months ago
- The easiest way to run the fastest MLX-based LLMs locally ☆289 · Updated 8 months ago
- Transcribe and summarize videos using Whisper and LLMs on Apple's MLX framework ☆75 · Updated last year
- For LLMs to write better code with the Jina API ☆158 · Updated 2 weeks ago
- The Moshi speech-to-speech model, deployed to Modal with a real-time CLI chat ☆57 · Updated 9 months ago
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆437 · Updated this week
- Turn text from websites into spoken audio with edge-tts, F5, etc., and save it as MP3 files ☆47 · Updated 2 weeks ago
- Generate accurate transcripts using Apple's MLX framework ☆425 · Updated 2 months ago
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching, using MLX. ☆89 · Updated 2 weeks ago
- Train Large Language Models on MLX. ☆126 · Updated this week
- For running inference and serving local LLMs using the MLX framework ☆104 · Updated last year
- Port of Suno's Bark TTS transformer in Apple's MLX framework ☆83 · Updated last year
- Dagger functions to import Hugging Face GGUF models into a local Ollama instance and optionally push them to ollama.com. ☆116 · Updated last year
- Examples of how to use various LLM providers on a wine-classification problem ☆96 · Updated 3 weeks ago
- ☆285 · Updated last year
- A little file for doing LLM-assisted prompt expansion and image generation using Flux.schnell - complete with prompt history, prompt queu… ☆26 · Updated 10 months ago
- Start a server from the MLX library. ☆188 · Updated 11 months ago