cubist38 / mlx-openai-server
A high-performance API server that exposes OpenAI-compatible endpoints for MLX models. Built with Python and the FastAPI framework, it offers an efficient, scalable, and user-friendly way to run MLX-based vision and language models locally behind an OpenAI-compatible interface.
☆175 · Updated this week
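Because the server speaks the OpenAI wire format, any OpenAI-compatible client can talk to it. A minimal standard-library sketch is below; the host, port, and model name are placeholder assumptions, not values taken from the project's documentation, so adjust them to your local setup.

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    # Standard OpenAI /v1/chat/completions request payload shape.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def chat(base_url: str, model: str, prompt: str) -> str:
    # POST the payload to the OpenAI-compatible endpoint and pull the
    # assistant text out of the standard response structure.
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Usage (assumes a server is already running; URL and model are placeholders):
#   reply = chat("http://localhost:8000",
#                "mlx-community/Qwen2.5-7B-Instruct-4bit",
#                "Hello from MLX!")
```

The same endpoint shape means existing OpenAI SDKs work too, by pointing their base URL at the local server instead of api.openai.com.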
Alternatives and similar repositories for mlx-openai-server
Users interested in mlx-openai-server are comparing it to the libraries listed below.
- FastMLX is a high-performance, production-ready API to host MLX models. ☆339 · Updated 9 months ago
- MLX-Embeddings is the best package for running vision and language embedding models locally on your Mac using MLX. ☆244 · Updated 2 months ago
- Train Large Language Models on MLX. ☆239 · Updated last month
- Phi-3.5 for Mac: locally run vision and language models for Apple Silicon. ☆274 · Updated 2 months ago
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching using MLX. ☆99 · Updated 6 months ago
- An Ollama-like CLI tool for MLX models on Hugging Face (pull, rm, list, show, serve, etc.). ☆121 · Updated this week
- A simple Jupyter notebook for learning MLX text-completion fine-tuning. ☆122 · Updated last year
- MLX-GUI: an MLX inference server for Apple Silicon. ☆162 · Updated 3 weeks ago
- Qwen image models through MPS. ☆249 · Updated last week
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆631 · Updated 2 weeks ago
- Distributed inference for MLX LLMs. ☆99 · Updated last year
- Your gateway to both Ollama & Apple MLX models. ☆150 · Updated 10 months ago
- For inferring and serving local LLMs using the MLX framework. ☆109 · Updated last year
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆286 · Updated 6 months ago
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆179 · Updated last year
- API server for Transformer Lab. ☆82 · Updated last month
- Fast parallel LLM inference for MLX. ☆241 · Updated last year
- Enhancing LLMs with LoRA. ☆204 · Updated 2 months ago
- A pure MLX-based training pipeline for fine-tuning LLMs using GRPO on Apple Silicon. ☆223 · Updated 2 months ago
- Train embedding and reranker models for retrieval tasks on Apple Silicon with MLX. ☆172 · Updated 3 months ago
- Start a server from the MLX library. ☆196 · Updated last year
- An implementation of the CSM (Conversation Speech Model) for Apple Silicon using MLX. ☆392 · Updated 4 months ago
- A command-line utility to manage MLX models between your Hugging Face cache and LM Studio. ☆73 · Updated 2 months ago
- Find the hidden meaning of LLMs. ☆38 · Updated last month
- Lightweight vision-native multimodal document agent. ☆154 · Updated 4 months ago
- Powerful and fast tool-calling agents. ☆79 · Updated 9 months ago
- Generate train.jsonl and valid.jsonl files to use for fine-tuning Mistral and other LLMs. ☆96 · Updated last year
- A wannabe Ollama equivalent for Apple MLX models. ☆81 · Updated 10 months ago
- This repo maintains a "cheat sheet" for LLMs that are undertrained on MLX. ☆18 · Updated 9 months ago
- GenAI & agent toolkit for Apple Silicon Macs, implementing JSON schema-steered structured output (3SO) and tool calling in Python. For mor… ☆130 · Updated last month