madroidmaq / mlx-omni-server
MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. It implements OpenAI-compatible API endpoints, enabling seamless integration with existing OpenAI SDK clients while leveraging the power of local ML inference.
☆301 · Updated last week
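Because the server exposes OpenAI-compatible endpoints, existing OpenAI SDK clients only need their base URL redirected. A minimal sketch of that workflow follows; the port and model name are illustrative placeholders, so consult the repository for actual defaults:

```python
# Minimal sketch: pointing the official OpenAI Python SDK at a local
# MLX Omni Server instance. The port (10240) and model name below are
# assumptions for illustration; check the project's README for defaults.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:10240/v1",  # redirect the SDK to the local server
    api_key="not-needed",                  # local inference; no real key required
)

response = client.chat.completions.create(
    model="mlx-community/Llama-3.2-3B-Instruct-4bit",  # any MLX-converted model
    messages=[{"role": "user", "content": "Hello from Apple Silicon!"}],
)
print(response.choices[0].message.content)
```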
Alternatives and similar repositories for mlx-omni-server:
Users interested in mlx-omni-server are comparing it to the libraries listed below.
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆288 · Updated 3 weeks ago
- A text-to-speech (TTS) and speech-to-speech (STS) library built on Apple's MLX framework, providing efficient speech synthesis on Apple Silicon. ☆463 · Updated this week
- An implementation of CSM (Conversational Speech Model) for Apple Silicon using MLX. ☆291 · Updated this week
- ☆159 · Updated last month
- MLX-Embeddings is a package for running vision and language embedding models locally on your Mac using MLX. ☆136 · Updated this week
- Run LLMs with MLX. ☆394 · Updated this week
- The easiest way to run the fastest MLX-based LLMs locally. ☆277 · Updated 5 months ago
- Blazing-fast Whisper Turbo for ASR (speech-to-text) tasks. ☆203 · Updated 5 months ago
- Apple MLX engine for LM Studio. ☆506 · Updated this week
- SiLLM simplifies training and running large language models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆262 · Updated last week
- Chat with MLX is a high-performance macOS application that connects your local documents to a personalized large language model (LLM). ☆173 · Updated last year
- Fast parallel LLM inference for MLX. ☆179 · Updated 9 months ago
- ☆272 · Updated this week
- Phi-3.5 for Mac: locally run vision and language models for Apple Silicon. ☆265 · Updated 7 months ago
- Large language model (LLM) applications and tools running in real time on Apple Silicon with Apple MLX. ☆435 · Updated 2 months ago
- Optimized Ollama LLM server configuration for Mac Studio and other Apple Silicon Macs. Headless setup with automatic startup and resource optimization. ☆158 · Updated last month
- An extremely fast implementation of Whisper optimized for Apple Silicon using MLX. ☆686 · Updated 11 months ago
- 📋 NotebookMLX: an open-source version of NotebookLM (ported from NotebookLlama). ☆273 · Updated last month
- 🤖✨ ChatMLX is a modern, open-source, high-performance chat application for macOS based on large language models. ☆771 · Updated last month
- A simple web UI/frontend for MLX's mlx-lm using Streamlit. ☆249 · Updated 2 months ago
- An implementation of F5-TTS in MLX. ☆517 · Updated 3 weeks ago
- MLX Model Manager unifies loading and inference for LLMs and VLMs. ☆86 · Updated 2 months ago
- MLX-VLM is a package for inference and fine-tuning of vision language models (VLMs) on your Mac using MLX. ☆1,155 · Updated this week
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆168 · Updated last year
- Start a server from the MLX library. ☆182 · Updated 8 months ago
- On-device image generation for Apple Silicon. ☆612 · Updated this week
- ☆87 · Updated 2 weeks ago
- Your gateway to both Ollama and Apple MLX models. ☆120 · Updated last month
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching, using MLX. ☆76 · Updated 4 months ago
- ☆74 · Updated 4 months ago