cubist38 / mlx-openai-server
A high-performance API server that provides OpenAI-compatible endpoints for MLX models. Built in Python on the FastAPI framework, it offers an efficient, scalable, and user-friendly way to run MLX-based vision and language models locally behind an OpenAI-compatible interface.
☆110 · Updated this week
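Because the server exposes OpenAI-compatible endpoints, any OpenAI-style HTTP client can talk to it. A minimal sketch of a chat-completion request body follows; the port (8000), endpoint path, and model id are assumptions for illustration, not values taken from the repo:

```python
import json

# Minimal sketch of an OpenAI-style chat completion request aimed at a
# locally running MLX server. The URL and model id below are assumptions;
# check the repo's README for the actual defaults.
url = "http://localhost:8000/v1/chat/completions"
payload = {
    "model": "mlx-community/Qwen2.5-7B-Instruct-4bit",  # hypothetical model id
    "messages": [
        {"role": "user", "content": "Summarize MLX in one sentence."}
    ],
    "max_tokens": 64,
    "temperature": 0.7,
}
body = json.dumps(payload)
# POST `body` to `url` with any HTTP client (requests, httpx, curl), or
# point the official openai SDK at base_url="http://localhost:8000/v1".
print(body)
```

Since the interface mirrors the OpenAI Chat Completions schema, existing tooling built against the OpenAI API should work unchanged once its base URL points at the local server.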
Alternatives and similar repositories for mlx-openai-server
Users interested in mlx-openai-server are comparing it to the libraries listed below.
- FastMLX is a high-performance, production-ready API to host MLX models. ☆331 · Updated 6 months ago
- Train Large Language Models on MLX. ☆183 · Updated last week
- MLX-Embeddings is the best package for running Vision and Language Embedding models locally on your Mac using MLX. ☆210 · Updated last month
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆573 · Updated last month
- Ollama-like CLI tool for MLX models on Hugging Face (pull, rm, list, show, serve, etc.). ☆103 · Updated last week
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching using MLX. ☆95 · Updated 3 months ago
- MLX-GUI: an MLX inference server for Apple Silicon. ☆124 · Updated last month
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆280 · Updated 3 months ago
- A command-line utility to manage MLX models between your Hugging Face cache and LM Studio. ☆63 · Updated 7 months ago
- Guaranteed Structured Output from any Language Model via Hierarchical State Machines. ☆146 · Updated last week
- Qwen Image models through MPS. ☆212 · Updated 2 weeks ago
- Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon. ☆272 · Updated last year
- Find the hidden meaning of LLMs. ☆27 · Updated 2 months ago
- API Server for Transformer Lab. ☆79 · Updated this week
- Start a server from the MLX library. ☆192 · Updated last year
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work. ☆279 · Updated last month
- Lightweight Vision-native Multimodal Document Agent. ☆119 · Updated last month
- Distributed inference for MLX LLMs. ☆96 · Updated last year
- For running inference on and serving local LLMs using the MLX framework. ☆109 · Updated last year
- A simple Jupyter Notebook for learning MLX text-completion fine-tuning! ☆122 · Updated 10 months ago
- LM Studio Apple MLX engine. ☆790 · Updated last week
- Fast parallel LLM inference for MLX. ☆220 · Updated last year
- High-performance MLX-based LLM inference engine for macOS with native Swift implementation. ☆412 · Updated last week
- Blazing-fast Whisper Turbo for ASR (speech-to-text) tasks. ☆217 · Updated 11 months ago
- Enhancing LLMs with LoRA. ☆159 · Updated 3 weeks ago
- The easiest way to run the fastest MLX-based LLMs locally. ☆302 · Updated 11 months ago
- A flexible, adaptive classification system for dynamic text classification. ☆463 · Updated 2 weeks ago
- An OpenAI-compatible API for chat with image input and questions about the images, i.e., multimodal chat. ☆259 · Updated 7 months ago
- Chrome & Firefox extension to chat with webpages using local LLMs. ☆126 · Updated 9 months ago
- Open Source Local Data Analysis Assistant. ☆41 · Updated this week