madroidmaq / mlx-omni-server
MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. It implements OpenAI-compatible API endpoints, enabling seamless integration with existing OpenAI SDK clients while leveraging the power of local ML inference.
☆399 · Updated this week
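Because the server exposes OpenAI-compatible endpoints, an existing OpenAI SDK client only needs its base URL pointed at the local address. A minimal sketch in Python follows; the port, dummy API key, and model identifier are illustrative assumptions, not values taken from this listing:

```python
# Minimal sketch: calling a local OpenAI-compatible server with the official
# OpenAI Python SDK. The port and model id below are assumed examples.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:10240/v1",  # assumed local server address
    api_key="not-needed",                  # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="mlx-community/Llama-3.2-3B-Instruct-4bit",  # hypothetical model id
    messages=[{"role": "user", "content": "Say hello from Apple Silicon."}],
)
print(response.choices[0].message.content)
```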
Alternatives and similar repositories for mlx-omni-server
Users interested in mlx-omni-server are comparing it to the libraries listed below.
- FastMLX is a high-performance, production-ready API to host MLX models. ☆305 · Updated 2 months ago
- An implementation of the CSM (Conversational Speech Model) for Apple Silicon using MLX. ☆344 · Updated 2 weeks ago
- MLX-Embeddings is the best package for running Vision and Language Embedding models locally on your Mac using MLX. ☆161 · Updated this week
- The easiest way to run the fastest MLX-based LLMs locally ☆282 · Updated 7 months ago
- Apple MLX engine for LM Studio ☆564 · Updated last week
- ☆173 · Updated 2 months ago
- Run LLMs with MLX ☆836 · Updated this week
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆267 · Updated this week
- Blazing-fast Whisper Turbo for ASR (speech-to-text) tasks ☆208 · Updated 7 months ago
- An implementation of Nvidia's Parakeet models for Apple Silicon using MLX. ☆232 · Updated this week
- Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon ☆265 · Updated 8 months ago
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆1,293 · Updated this week
- Large Language Model (LLM) applications and tools running on Apple Silicon in real time with Apple MLX. ☆443 · Updated 4 months ago
- Train Large Language Models on MLX. ☆68 · Updated last week
- Optimized Ollama LLM server configuration for Mac Studio and other Apple Silicon Macs. Headless setup with automatic startup, resource op… ☆182 · Updated 2 months ago
- MLX Model Manager unifies loading and inference for LLMs and VLMs. ☆93 · Updated 4 months ago
- An extremely fast implementation of Whisper optimized for Apple Silicon using MLX. ☆706 · Updated last year
- Your gateway to both Ollama & Apple MLX models ☆134 · Updated 2 months ago
- 🤖✨ ChatMLX is a modern, open-source, high-performance chat application for macOS based on large language models. ☆788 · Updated 2 months ago
- Explore a simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆170 · Updated last year
- ☆282 · Updated last month
- On-device Image Generation for Apple Silicon ☆617 · Updated last month
- Start a server from the MLX library. ☆187 · Updated 10 months ago
- Fast parallel LLM inference for MLX ☆188 · Updated 10 months ago
- 📋 NotebookMLX - An Open Source version of NotebookLM (Ported NotebookLlama) ☆289 · Updated 2 months ago
- A simple UI / Web / Frontend for MLX mlx-lm using Streamlit. ☆253 · Updated 4 months ago
- Chat with MLX is a high-performance macOS application that connects your local documents to a personalized large language model (LLM). ☆174 · Updated last year
- Implementation of F5-TTS in MLX ☆541 · Updated 2 months ago
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching using MLX. ☆83 · Updated 5 months ago
- Claude Deep Research config for Claude Code. ☆176 · Updated 2 months ago