cubist38 / mlx-openai-server
A high-performance API server that exposes OpenAI-compatible endpoints for MLX models. Built in Python on the FastAPI framework, it offers an efficient, scalable, and user-friendly way to run MLX-based vision and language models locally behind an OpenAI-compatible interface.
☆217 · Updated this week
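Because the server mirrors the OpenAI API surface, any OpenAI-style client can talk to it by pointing at the local base URL. A minimal sketch of building a `/chat/completions` request body — the host, port, and model id below are assumptions, not values from this page; adjust them to your local setup:

```python
import json

# Hypothetical local address -- mlx-openai-server's actual host/port
# depend on how you launch it.
BASE_URL = "http://localhost:8000/v1"

def chat_payload(model, messages, stream=False):
    """Build an OpenAI-style /chat/completions request body."""
    return {"model": model, "messages": messages, "stream": stream}

payload = chat_payload(
    # Example MLX model id (assumption; use whichever model you serve).
    "mlx-community/Qwen2.5-7B-Instruct-4bit",
    [{"role": "user", "content": "Hello!"}],
)
print(json.dumps(payload))
# Send this as a JSON POST to {BASE_URL}/chat/completions.
```

The same shape works with the official `openai` Python client by setting its `base_url` to the local server, which is the main convenience of an OpenAI-compatible interface.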
Alternatives and similar repositories for mlx-openai-server
Users interested in mlx-openai-server are comparing it to the libraries listed below.
- ollama-like CLI tool for MLX models on Hugging Face (pull, rm, list, show, serve, etc.) ☆127 · Feb 5, 2026 · Updated last week
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆661 · Dec 21, 2025 · Updated last month
- Minimal Claude Code alternative powered by MLX ☆45 · Jan 11, 2026 · Updated last month
- High-performance MLX-based LLM inference engine for macOS with a native Swift implementation ☆478 · Jan 19, 2026 · Updated 3 weeks ago
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆342 · Mar 18, 2025 · Updated 10 months ago
- MLX-GUI: an MLX inference server for Apple Silicon ☆184 · Jan 13, 2026 · Updated last month
- ☆22 · Aug 1, 2025 · Updated 6 months ago
- ☆14 · Dec 6, 2025 · Updated 2 months ago
- JavaScript multivariate data visualization ☆14 · Jan 10, 2017 · Updated 9 years ago
- A collection of optimizers for MLX ☆55 · Dec 12, 2025 · Updated 2 months ago
- Explore Building Computer Use Agents with Gemini 2.0 ☆19 · Dec 12, 2024 · Updated last year
- Run Claude Code with local MLX-powered models ☆25 · Jan 10, 2026 · Updated last month
- Train Large Language Models on MLX. ☆258 · Updated this week
- Instant Perfect Native macOS Transcription ☆52 · Jul 26, 2025 · Updated 6 months ago
- Utility to create xast trees ☆13 · Jul 31, 2023 · Updated 2 years ago
- CLI for Recursive Language Models ☆42 · Jan 28, 2026 · Updated 2 weeks ago
- Real-time webcam demo with SmolVLM (mlx-community/SmolVLM-Instruct-4bit) and MLX-VLM ☆25 · Jun 12, 2025 · Updated 8 months ago
- Run LLMs with MLX ☆3,577 · Feb 7, 2026 · Updated last week
- A pure MLX-based training pipeline for fine-tuning LLMs using GRPO on Apple Silicon. ☆228 · Oct 28, 2025 · Updated 3 months ago
- A command-line utility to manage MLX models between your Hugging Face cache and LM Studio. ☆78 · Nov 11, 2025 · Updated 3 months ago
- MCP server providing kanban-based task-management memory for complex multi-session workflows with AI agents ☆33 · Jul 13, 2025 · Updated 7 months ago
- Smaller and faster nanochat in MLX ☆36 · Nov 15, 2025 · Updated 2 months ago
- ☆14 · Dec 17, 2025 · Updated last month
- Implementation of ModernBERT in MLX ☆20 · Jan 7, 2026 · Updated last month
- Introduction to MLX for Swift developers ☆45 · Jun 23, 2025 · Updated 7 months ago
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆284 · Jun 16, 2025 · Updated 7 months ago
- Fast parallel LLM inference for MLX ☆246 · Jul 7, 2024 · Updated last year
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆2,135 · Updated this week
- Sample project for F5-TTS using MLX Swift ☆50 · Jan 15, 2026 · Updated 3 weeks ago
- A framework for orchestrating AI agents using a Mermaid graph ☆76 · May 16, 2024 · Updated last year
- MLX-Embeddings is the best package for running Vision and Language Embedding models locally on your Mac using MLX. ☆273 · Updated this week
- A proxy for minimax-m2, enabling interleaved thinking and tool calls. ☆38 · Nov 21, 2025 · Updated 2 months ago
- Memory Agent monorepo ☆81 · Oct 9, 2025 · Updated 4 months ago
- Portable prebuilt binaries of React Native, wrapped as a SwiftPM Package ☆23 · Oct 3, 2024 · Updated last year
- Hyperparam local dataset viewer ☆27 · Updated this week
- Secure VSCode communication channel designed for voice coding ☆22 · Jan 25, 2025 · Updated last year
- ☆21 · Oct 9, 2024 · Updated last year
- KAN (Kolmogorov–Arnold Networks) in the MLX framework for Apple Silicon ☆31 · Jun 18, 2025 · Updated 7 months ago
- A Model Context Protocol (MCP) server that provides file system context to Large Language Models (LLMs). This server enables LLMs to read… ☆35 · Jul 10, 2025 · Updated 7 months ago