madroidmaq / mlx-omni-server
MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. It implements OpenAI-compatible API endpoints, enabling seamless integration with existing OpenAI SDK clients while leveraging the power of local ML inference.
☆662 · Dec 21, 2025 · Updated last month
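Because the server exposes OpenAI-compatible endpoints, an existing OpenAI SDK client only needs its base URL redirected to the local instance. Below is a minimal sketch in Python; the port, URL path, and model identifier are illustrative assumptions, not values taken from the project's documentation.

```python
# Minimal sketch: pointing the official OpenAI Python SDK at a local
# OpenAI-compatible server. The base_url, port, and model name are
# illustrative assumptions, not confirmed defaults of mlx-omni-server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:10240/v1",  # local server instead of api.openai.com (port is an assumption)
    api_key="not-needed",                  # local servers typically ignore the key, but the SDK requires one
)

response = client.chat.completions.create(
    model="mlx-community/Llama-3.2-1B-Instruct-4bit",  # example MLX model id, chosen for illustration
    messages=[{"role": "user", "content": "Hello from Apple Silicon!"}],
)
print(response.choices[0].message.content)
```

The same base-URL swap is the usual way to target the other OpenAI-compatible servers listed below.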
Alternatives and similar repositories for mlx-omni-server
Users interested in mlx-omni-server are comparing it to the libraries listed below.
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆2,135 · Updated this week
- FastMLX is a high-performance, production-ready API to host MLX models. ☆342 · Mar 18, 2025 · Updated 10 months ago
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching, using MLX. ☆100 · Jun 29, 2025 · Updated 7 months ago
- Train Large Language Models on MLX. ☆262 · Updated this week
- A high-performance API server that provides OpenAI-compatible endpoints for MLX models. Developed using Python and powered by the FastAPI… ☆217 · Updated this week
- MLX native implementations of state-of-the-art generative image models ☆1,807 · Feb 8, 2026 · Updated last week
- 🤖✨ChatMLX is a modern, open-source, high-performance chat application for macOS based on large language models. ☆819 · Mar 12, 2025 · Updated 11 months ago
- MLX-Embeddings is the best package for running Vision and Language Embedding models locally on your Mac using MLX. ☆273 · Updated this week
- This repo maintains a 'cheat sheet' for LLMs that are undertrained on mlx ☆18 · Mar 15, 2025 · Updated 11 months ago
- Minimal Claude Code alternative powered by MLX ☆45 · Jan 11, 2026 · Updated last month
- MLX Model Manager unifies loading and inference for LLMs and VLMs. ☆103 · Jan 30, 2025 · Updated last year
- The easiest way to run the fastest MLX-based LLMs locally ☆313 · Oct 30, 2024 · Updated last year
- A command-line utility to manage MLX models between your Hugging Face cache and LM Studio. ☆78 · Nov 11, 2025 · Updated 3 months ago
- An extremely fast implementation of Whisper optimized for Apple Silicon using MLX. ☆872 · May 8, 2024 · Updated last year
- Distributed inference for MLX LLMs ☆100 · Aug 1, 2024 · Updated last year
- Start a server from the MLX library. ☆198 · Jul 26, 2024 · Updated last year
- 📋 NotebookMLX - An open-source version of NotebookLM (a port of NotebookLlama) ☆338 · Mar 3, 2025 · Updated 11 months ago
- Implementation of F5-TTS in MLX ☆606 · Mar 19, 2025 · Updated 10 months ago
- On-device Image Generation for Apple Silicon ☆687 · Apr 11, 2025 · Updated 10 months ago
- Fast parallel LLM inference for MLX ☆247 · Jul 7, 2024 · Updated last year
- An implementation of CSM (Conversational Speech Model) for Apple Silicon using MLX. ☆395 · Aug 15, 2025 · Updated 6 months ago
- A text-to-speech (TTS), speech-to-text (STT) and speech-to-speech (STS) library built on Apple's MLX framework, providing efficient speec… ☆5,944 · Updated this week
- ☆197 · Mar 17, 2025 · Updated 10 months ago
- High-performance MLX-based LLM inference engine for macOS with native Swift implementation ☆482 · Feb 9, 2026 · Updated last week
- Run LLMs with MLX ☆3,650 · Updated this week
- MLX-GUI: MLX inference server for Apple Silicon ☆184 · Jan 13, 2026 · Updated last month
- An Ollama-like CLI tool for MLX models on Hugging Face (pull, rm, list, show, serve, etc.) ☆127 · Feb 5, 2026 · Updated last week
- Swift implementation of Flux.1 using mlx-swift ☆113 · Aug 10, 2025 · Updated 6 months ago
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆284 · Jun 16, 2025 · Updated 8 months ago
- A collection of optimizers for MLX ☆55 · Dec 12, 2025 · Updated 2 months ago
- 🧠 Retrieval Augmented Generation (RAG) example ☆19 · Aug 18, 2025 · Updated 5 months ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆85 · Aug 20, 2025 · Updated 5 months ago
- MLX Transformers is a library that provides model implementation in MLX. It uses a similar model interface as HuggingFace Transformers an… ☆73 · Nov 19, 2024 · Updated last year
- Optimized Ollama LLM server configuration for Mac Studio and other Apple Silicon Macs. Headless setup with automatic startup, resource op… ☆282 · Jan 24, 2026 · Updated 3 weeks ago
- LM Studio Apple MLX engine ☆890 · Updated this week
- Large Language Model (LLM) applications and tools running on Apple Silicon in real time with Apple MLX. ☆458 · Jan 29, 2025 · Updated last year
- Examples in the MLX framework ☆8,238 · Updated this week
- Generate accurate transcripts using Apple's MLX framework ☆448 · Apr 26, 2025 · Updated 9 months ago
- Gradio chat interface for FastMLX ☆12 · Sep 22, 2024 · Updated last year