mustafaaljadery / lightning-whisper-mlx
An extremely fast implementation of whisper optimized for Apple Silicon using MLX.
☆685 · Updated 11 months ago
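For context, here is a minimal usage sketch of lightning-whisper-mlx. The `LightningWhisperMLX` class, its `model`, `batch_size`, and `quant` parameters, and the `transcribe` method follow the pattern shown in the project's README; treat the exact names and defaults as assumptions rather than a verified API.

```python
# Hypothetical usage sketch for lightning-whisper-mlx (names assumed from its README).
# Requires an Apple Silicon Mac; install with: pip install lightning-whisper-mlx
from lightning_whisper_mlx import LightningWhisperMLX

# Choose a Whisper checkpoint; batched decoding and optional quantization
# (e.g. None, "4bit", "8bit") are the speed levers the project advertises.
whisper = LightningWhisperMLX(model="distil-medium.en", batch_size=12, quant=None)

# Transcribe a local audio file; the result is expected to contain the decoded text.
result = whisper.transcribe(audio_path="audio.mp3")
print(result["text"])
```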
Alternatives and similar repositories for lightning-whisper-mlx:
Users interested in lightning-whisper-mlx are comparing it to the libraries listed below.
- Implementation of F5-TTS in MLX ☆517 · Updated 3 weeks ago
- FastMLX is a high-performance, production-ready API to host MLX models. ☆288 · Updated 3 weeks ago
- Large Language Model (LLM) applications and tools running on Apple Silicon in real-time with Apple MLX. ☆435 · Updated 2 months ago
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆1,155 · Updated this week
- Whisper with Medusa heads ☆830 · Updated last month
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆301 · Updated this week
- Run LLMs with MLX ☆366 · Updated this week
- Apple MLX engine for LM Studio ☆499 · Updated this week
- A text-to-speech (TTS) and speech-to-speech (STS) library built on Apple's MLX framework, providing efficient speech synthesis on Apple S… ☆429 · Updated last week
- An implementation of the CSM (Conversation Speech Model) for Apple Silicon using MLX. ☆266 · Updated this week
- Blazing-fast Whisper Turbo for ASR (speech-to-text) tasks ☆202 · Updated 5 months ago
- On-device image generation for Apple Silicon ☆611 · Updated this week
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆262 · Updated this week
- Mac-compatible Ollama Voice ☆474 · Updated last year
- Python tools for WhisperKit: model conversion, optimization, and evaluation ☆211 · Updated 2 months ago
- Local voice chatbot for engaging conversations, powered by Ollama, Hugging Face Transformers, and the Coqui TTS Toolkit ☆758 · Updated 8 months ago
- Phi-3.5 for Mac: locally-run vision and language models for Apple Silicon ☆265 · Updated 7 months ago
- A simple web UI / frontend for MLX's mlx-lm using Streamlit. ☆247 · Updated 2 months ago
- ☆159 · Updated 3 weeks ago
- Generate accurate transcripts using Apple's MLX framework ☆390 · Updated 3 weeks ago
- 🤖✨ ChatMLX is a modern, open-source, high-performance chat application for macOS based on large language models. ☆768 · Updated last month
- The easiest way to run the fastest MLX-based LLMs locally ☆271 · Updated 5 months ago
- Start a server from the MLX library. ☆182 · Updated 8 months ago
- Stateful load balancer custom-tailored for llama.cpp 🏓🦙 ☆737 · Updated last week
- WhisperFusion builds upon the capabilities of WhisperLive and WhisperSpeech to provide seamless conversations with an AI. ☆1,591 · Updated 8 months ago
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆167 · Updated last year
- Minimal extension of OpenAI's Whisper adding speaker diarization with special tokens ☆487 · Updated last year
- From anywhere you can type, query and stream the output of an LLM or any other script ☆493 · Updated last year
- Suno AI's Bark model in C/C++ for fast text-to-speech generation ☆796 · Updated 4 months ago
- Fast parallel LLM inference for MLX ☆178 · Updated 9 months ago