MLX Transformers is a library that provides model implementations in MLX. It uses a model interface similar to Hugging Face Transformers and provides a way to load and run models on Apple Silicon devices.
☆74 · Updated Nov 19, 2024
Alternatives and similar repositories for mlx-transformers
Users interested in mlx-transformers are comparing it to the libraries listed below.
- 🧠 Retrieval Augmented Generation (RAG) example (☆19, updated Feb 19, 2026)
- Gradio chat interface for FastMLX (☆12, updated Sep 22, 2024)
- Very basic framework for composable, parameterized large language model (Q)LoRA / (Q)DoRA fine-tuning using mlx, mlx_lm, and OgbujiPT. (☆43, updated Jun 20, 2025)
- Port of Andrej Karpathy's nanoGPT to the Apple MLX framework. (☆117, updated Feb 12, 2024)
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. (☆284, updated Jun 16, 2025)
- An example implementation of RLHF (or, more accurately, RLAIF) built on MLX and HuggingFace. (☆38, updated Jun 21, 2024)
- Personal notes shared while working with the Apple MLX machine learning framework. (☆24, updated Dec 12, 2025)
- FastMLX is a high-performance, production-ready API for hosting MLX models. (☆346, updated Mar 18, 2025)
- A simple web UI / frontend for MLX mlx-lm using Streamlit. (☆260, updated Oct 25, 2025)
- MLX Image Models (☆24, updated Mar 14, 2024)
- Examples of using the SiLLM framework for training and running Large Language Models (LLMs) on Apple Silicon. (☆16, updated May 8, 2025)
- MLX image models for Apple Silicon machines. (☆91, updated Nov 30, 2025)
- Your gateway to both Ollama and Apple MLX models. (☆150, updated Mar 2, 2025)
- Chat with MLX is a high-performance macOS application that connects your local documents to a personalized large language model (LLM). (☆178, updated Mar 8, 2024)
- A simple Jupyter notebook for learning MLX text-completion fine-tuning. (☆124, updated Nov 10, 2024)
- Minimal, clean implementation of RAG with MLX using GGUF model weights. (☆53, updated Apr 27, 2024)
- A chatbot UI for RAG, multimodal input, and text completion (supports Transformers, llama.cpp, MLX, and vLLM). (☆20, updated Apr 18, 2024)
- RoBERTa question answering using MLX. (☆24, updated Feb 22, 2026)
- Generate train.jsonl and valid.jsonl files for fine-tuning Mistral and other LLMs. (☆97, updated Feb 5, 2024)
- Distributed inference for MLX LLMs. (☆100, updated Aug 1, 2024)
- Inference and serving for local LLMs using the MLX framework. (☆110, updated Mar 24, 2024)
- A fast, minimalistic implementation of guided generation on Apple Silicon using Outlines and MLX. (☆59, updated Feb 9, 2024)
- Introduction to MLX for Swift developers. (☆45, updated Jun 23, 2025)
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching, using MLX. (☆101, updated Jun 29, 2025)
- MLX implementation of a GCN, with benchmarks on MPS, CUDA, and CPU (M1 Pro, M2 Ultra, M3 Max). (☆25, updated Dec 16, 2023)
- ☆92, updated Jan 24, 2025
- Fast parallel LLM inference for MLX. (☆247, updated Jul 7, 2024)
- Run embeddings in MLX. (☆97, updated Sep 27, 2024)
- Large Language Model (LLM) applications and tools running in real time on Apple Silicon with Apple MLX. (☆459, updated Jan 29, 2025)
- KAN (Kolmogorov–Arnold Networks) in the MLX framework for Apple Silicon. (☆31, updated Jun 18, 2025)
- ☆11, updated Aug 26, 2024
- Triton-style kernel toolkit for MLX plus a small upstream incubator: prototype, benchmark, and upstream fusions for Apple Silicon. (☆36, updated this week)
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. (☆178, updated Jan 31, 2024)
- A swarm of LLM agents that help you test, document, and productionize your code! (☆16, updated Feb 16, 2026)
- A tiny server for running local inference on MLX models in the style of OpenAI. (☆13, updated Jan 31, 2024)
- Condense codebases into a single file for use with long-context LLMs (Gemini 1.5 Pro, GPT-4-Turbo, Claude Opus). (☆13, updated Apr 1, 2024)
- Interact with your robot in JS, inspired by LeRobot. (☆36, updated Nov 14, 2025)
- ☆15, updated May 17, 2024
- Benchmark of Apple MLX operations on all Apple Silicon chips (GPU, CPU) plus MPS and CUDA. (☆216, updated Jan 4, 2026)