SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework.
☆284 · Jun 16, 2025 · Updated 8 months ago
Alternatives and similar repositories for SiLLM
Users interested in SiLLM are comparing it to the libraries listed below.
- Examples of using the SiLLM framework for training and running Large Language Models (LLMs) on Apple Silicon ☆16 · May 8, 2025 · Updated 9 months ago
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆345 · Mar 18, 2025 · Updated 11 months ago
- Fast parallel LLM inference for MLX ☆247 · Jul 7, 2024 · Updated last year
- Large Language Model (LLM) applications and tools running in real time on Apple Silicon with Apple MLX. ☆459 · Jan 29, 2025 · Updated last year
- MLX Transformers is a library that provides model implementations in MLX. It uses a similar model interface as HuggingFace Transformers an… ☆74 · Nov 19, 2024 · Updated last year
- For running inference on and serving local LLMs using the MLX framework ☆110 · Mar 24, 2024 · Updated last year
- Chat with MLX is a high-performance macOS application that connects your local documents to a personalized large language model (LLM). ☆178 · Mar 8, 2024 · Updated last year
- ☆199 · Mar 17, 2025 · Updated 11 months ago
- Very basic framework for composable, parameterized large language model (Q)LoRA / (Q)DoRA fine-tuning using mlx, mlx_lm, and OgbujiPT. ☆43 · Jun 20, 2025 · Updated 8 months ago
- A simple Jupyter Notebook for learning MLX text-completion fine-tuning! ☆124 · Nov 10, 2024 · Updated last year
- A simple web UI / frontend for MLX mlx-lm using Streamlit. ☆260 · Oct 25, 2025 · Updated 4 months ago
- A simple script to enhance text editing across your Mac, leveraging the power of MLX. Designed for seamless integration, it offers real-t… ☆109 · Mar 4, 2024 · Updated last year
- A fast, minimalistic implementation of guided generation on Apple Silicon using Outlines and MLX ☆59 · Feb 9, 2024 · Updated 2 years ago
- A multi-platform SwiftUI frontend for running local LLMs with Apple's MLX framework. ☆432 · Oct 27, 2024 · Updated last year
- Shared personal notes created while working with the Apple MLX machine learning framework ☆24 · Dec 12, 2025 · Updated 2 months ago
- Run embeddings in MLX ☆97 · Sep 27, 2024 · Updated last year
- Minimal, clean-code implementation of RAG with MLX using GGUF model weights ☆53 · Apr 27, 2024 · Updated last year
- MLX image models for Apple Silicon machines ☆91 · Nov 30, 2025 · Updated 3 months ago
- Start a server from the MLX library. ☆199 · Jul 26, 2024 · Updated last year
- Distributed inference for MLX LLMs ☆100 · Aug 1, 2024 · Updated last year
- Scripts to create your own MoE models using MLX ☆90 · Feb 26, 2024 · Updated 2 years ago
- On-device image generation for Apple Silicon ☆691 · Apr 11, 2025 · Updated 10 months ago
- An all-in-one LLM chat UI for Apple Silicon Macs using the MLX framework. ☆1,588 · Sep 6, 2024 · Updated last year
- Phi-3.5 for Mac: locally run vision and language models for Apple Silicon ☆274 · Nov 9, 2025 · Updated 3 months ago
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆2,177 · Updated this week
- A collection of optimizers for MLX ☆57 · Dec 12, 2025 · Updated 2 months ago
- Generate train.jsonl and valid.jsonl files for fine-tuning Mistral and other LLMs. ☆97 · Feb 5, 2024 · Updated 2 years ago
- Gradio chat interface for FastMLX ☆12 · Sep 22, 2024 · Updated last year
- An example implementation of RLHF (or, more accurately, RLAIF) built on MLX and HuggingFace. ☆38 · Jun 21, 2024 · Updated last year
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆178 · Jan 31, 2024 · Updated 2 years ago
- MLX Model Manager unifies loading and inference for LLMs and VLMs. ☆103 · Jan 30, 2025 · Updated last year
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching, using MLX. ☆100 · Jun 29, 2025 · Updated 8 months ago
- Clean RL implementation using MLX ☆35 · Mar 8, 2024 · Updated last year
- The easiest way to run the fastest MLX-based LLMs locally ☆314 · Oct 30, 2024 · Updated last year
- Run large models from the terminal using Apple MLX. ☆31 · Mar 18, 2024 · Updated last year
- MLX native implementations of state-of-the-art generative image models ☆1,847 · Updated this week
- 📋 NotebookMLX - An Open Source version of NotebookLM (Ported NotebookLlama) ☆338 · Mar 3, 2025 · Updated 11 months ago
- LLM-based agents with proactive interactions, long-term memory, external tool integration, and local deployment capabilities. ☆109 · Jul 29, 2025 · Updated 7 months ago
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆668 · Dec 21, 2025 · Updated 2 months ago