cubist38 / mlx-openai-server
A high-performance API server that exposes OpenAI-compatible endpoints for MLX models. Built in Python on the FastAPI framework, it offers an efficient, scalable, and user-friendly way to run MLX-based vision and language models locally behind an OpenAI-compatible interface.
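Because the server speaks the OpenAI API, any OpenAI-compatible client can talk to it. A minimal sketch using only the Python standard library, assuming the server runs on `localhost:8000` and serves a quantized MLX model (both the port and the model name are assumptions; check the repo's README for the actual defaults):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000/v1"  # assumed local endpoint; verify against the README

def build_chat_request(prompt, model="mlx-community/Qwen2.5-7B-Instruct-4bit"):
    """Build an OpenAI-style /chat/completions request for the local server.

    The model name is a hypothetical example of an MLX community model.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def ask(prompt):
    """Send the request and return the reply text; requires the server to be running."""
    with urllib.request.urlopen(build_chat_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The same endpoint shape means the official `openai` Python client also works by pointing its `base_url` at the local server.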
☆153 · Updated last week
Alternatives and similar repositories for mlx-openai-server
Users interested in mlx-openai-server are comparing it to the libraries listed below.
- Train Large Language Models on MLX. ☆232 · Updated last week
- MLX-Embeddings is the best package for running Vision and Language Embedding models locally on your Mac using MLX. ☆236 · Updated last month
- FastMLX is a high-performance, production-ready API to host MLX models. ☆337 · Updated 9 months ago
- MLX-GUI: an MLX inference server for Apple Silicon. ☆157 · Updated this week
- An Ollama-like CLI tool for MLX models on Hugging Face (pull, rm, list, show, serve, etc.). ☆120 · Updated this week
- Distributed inference for MLX LLMs. ☆99 · Updated last year
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching using MLX. ☆99 · Updated 5 months ago
- MLX Omni Server is a local inference server powered by Apple's MLX framework, designed specifically for Apple Silicon (M-series) chips. ☆624 · Updated 2 months ago
- Find the hidden meaning of LLMs. ☆39 · Updated last month
- A pure MLX-based training pipeline for fine-tuning LLMs using GRPO on Apple Silicon. ☆219 · Updated last month
- A command-line utility to manage MLX models between your Hugging Face cache and LM Studio. ☆68 · Updated last month
- GenAI & agent toolkit for Apple Silicon Macs, implementing JSON schema-steered structured output (3SO) and tool-calling in Python. ☆129 · Updated last week
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆285 · Updated 6 months ago
- Phi-3.5 for Mac: locally run Vision and Language Models for Apple Silicon. ☆275 · Updated last month
- Blazing-fast Whisper Turbo for ASR (speech-to-text) tasks. ☆217 · Updated last month
- Fast parallel LLM inference for MLX. ☆235 · Updated last year
- Qwen Image models through MPS. ☆244 · Updated last month
- Start a server from the MLX library. ☆195 · Updated last year
- This repo maintains a 'cheat sheet' for LLMs that are undertrained on MLX. ☆18 · Updated 9 months ago
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆179 · Updated last year
- Inference and serving for local LLMs using the MLX framework. ☆109 · Updated last year
- An implementation of CSM (Conversation Speech Model) for Apple Silicon using MLX. ☆388 · Updated 4 months ago
- A simple Jupyter notebook for learning MLX text-completion fine-tuning! ☆122 · Updated last year
- Your gateway to both Ollama & Apple MLX models. ☆150 · Updated 9 months ago
- Train embedding and reranker models for retrieval tasks on Apple Silicon with MLX. ☆168 · Updated 3 months ago
- ☆195 · Updated 9 months ago
- llmbasedos: Local-First OS Where Your AI Agents Wake Up and Work. ☆278 · Updated 4 months ago
- Enhancing LLMs with LoRA. ☆193 · Updated 2 months ago
- Lightweight vision-native multimodal document agent. ☆155 · Updated 3 months ago
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding without retraining. ☆47 · Updated last month