A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching, using MLX.
☆103 · Jun 29, 2025 · Updated 10 months ago
Alternatives and similar repositories for MLX-Textgen
Users interested in MLX-Textgen are comparing it to the libraries listed below.
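Since MLX-Textgen exposes an OpenAI-compatible endpoint, any OpenAI-style client should be able to talk to it. The sketch below shows the general shape of such a request using only the Python standard library; the base URL, port, and model name are assumptions for illustration — check the server's own documentation for its actual defaults.

```python
import json
from urllib import request

# Hypothetical local endpoint; the real host/port depends on how the
# server is launched (this is an assumption, not the documented default).
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(model: str, user_message: str) -> dict:
    """Build a chat-completion payload in the OpenAI wire format."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,
    }

def post_chat(payload: dict) -> dict:
    """POST the payload to the /chat/completions route and decode the JSON reply."""
    req = request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request("my-local-model", "Hello!")
# post_chat(payload)  # requires a running server, so not executed here
```

Because the wire format matches OpenAI's, the official `openai` client pointed at the local `base_url` would work just as well.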
- For inferring and serving local LLMs using the MLX framework ☆114 · Mar 24, 2024 · Updated 2 years ago
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆708 · Mar 10, 2026 · Updated last month
- A little file for doing LLM-assisted prompt expansion and image generation using Flux.schnell - complete with prompt history, prompt queu… ☆26 · Aug 16, 2024 · Updated last year
- A simple Jupyter Notebook for learning MLX text-completion fine-tuning! ☆124 · Nov 10, 2024 · Updated last year
- Fast parallel LLM inference for MLX ☆249 · Jul 7, 2024 · Updated last year
- An example implementation of RLHF (or, more accurately, RLAIF) built on MLX and HuggingFace. ☆38 · Jun 21, 2024 · Updated last year
- Very basic framework for composable parameterized large language model (Q)LoRA / (Q)DoRA fine-tuning using mlx, mlx_lm, and OgbujiPT. ☆42 · Jun 20, 2025 · Updated 10 months ago
- ☆21 · Oct 9, 2024 · Updated last year
- RoBERTa question answering using MLX. ☆24 · Feb 22, 2026 · Updated 2 months ago
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆286 · Jun 16, 2025 · Updated 10 months ago
- Gradio chat interface for FastMLX ☆12 · Sep 22, 2024 · Updated last year
- This repo maintains a 'cheat sheet' for LLMs that are undertrained on MLX ☆33 · Mar 12, 2026 · Updated last month
- 🧠 Retrieval-Augmented Generation (RAG) example ☆19 · Apr 17, 2026 · Updated 2 weeks ago
- 🤖✨ ChatMLX is a modern, open-source, high-performance chat application for macOS based on large language models. ☆826 · Mar 12, 2025 · Updated last year
- Generate train.jsonl and valid.jsonl files to use for fine-tuning Mistral and other LLMs. ☆97 · Feb 5, 2024 · Updated 2 years ago
- Minimal Claude Code alternative powered by MLX ☆46 · Jan 11, 2026 · Updated 3 months ago
- A CLI in Rust to generate synthetic data for MLX-friendly training ☆25 · Jan 13, 2024 · Updated 2 years ago
- Chat²GPT is a ChatGPT (and DALL·E 2/3, and ElevenLabs) chat bot for Google Chat. 🤖💬 ☆11 · Feb 2, 2026 · Updated 3 months ago
- MLX-Embeddings is a package for running Vision and Language Embedding models locally on your Mac using MLX. ☆360 · Apr 24, 2026 · Updated last week
- A tiny server to run local inference on MLX models in the style of OpenAI ☆13 · Jan 31, 2024 · Updated 2 years ago
- An OpenAI API-compatible LLM inference server based on ExLlamaV2. ☆25 · Feb 9, 2024 · Updated 2 years ago
- ☆223 · Jan 23, 2025 · Updated last year
- o1lama: use Ollama with Llama 3.2 3B and other models locally to create reasoning chains similar in appearance to OpenAI's o1. ☆22 · Jun 1, 2025 · Updated 11 months ago
- GenAI & agent toolkit for Apple Silicon Mac, implementing JSON schema-steered structured output (3SO) and tool-calling in Python. For mor… ☆135 · Feb 27, 2026 · Updated 2 months ago
- Introduction to MLX for Swift developers ☆46 · Jun 23, 2025 · Updated 10 months ago
- Scripts to create your own MoE models using MLX ☆89 · Feb 26, 2024 · Updated 2 years ago
- Examples for using the SiLLM framework for training and running Large Language Models (LLMs) on Apple Silicon ☆16 · May 8, 2025 · Updated 11 months ago
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆4,573 · Updated this week
- ☆49 · Mar 17, 2026 · Updated last month
- MLX image models for Apple Silicon machines ☆95 · Apr 8, 2026 · Updated 3 weeks ago
- Run embeddings in MLX ☆98 · Sep 27, 2024 · Updated last year
- CLI tool for text-to-image generation using the FLUX.1 model. ☆67 · Jun 28, 2025 · Updated 10 months ago
- A FastAPI-based LLM server that loads multiple LLM models (MLX or llama.cpp) simultaneously using multiprocessing. ☆17 · Apr 8, 2026 · Updated 3 weeks ago
- On-device image generation for Apple Silicon ☆700 · Apr 11, 2025 · Updated last year
- MLX-native implementations of state-of-the-art generative image models ☆2,037 · Apr 10, 2026 · Updated 3 weeks ago
- Transcribe and summarize videos using Whisper and LLMs on the Apple MLX framework ☆80 · Jan 28, 2024 · Updated 2 years ago
- Start a server from the MLX library. ☆199 · Jul 26, 2024 · Updated last year
- The easiest way to run the fastest MLX-based LLMs locally ☆323 · Oct 30, 2024 · Updated last year
- ☆23 · Sep 19, 2024 · Updated last year