mzbac / mlx-llm-server
For running inference and serving local LLMs using the MLX framework
☆77 · Updated 5 months ago
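
As a quick smoke test of the server, the snippet below sends a chat request from Python. This is a minimal sketch that assumes an OpenAI-compatible /v1/chat/completions endpoint on localhost:8080; the host, port, and model name here are assumptions, so check the repository's README for the actual launch command and defaults.

```python
# Minimal client sketch: assumes mlx-llm-server exposes an OpenAI-compatible
# /v1/chat/completions endpoint on localhost:8080 (assumed defaults; see the
# repo README). Uses only the standard library.
import json
import urllib.request

payload = {
    "model": "local-model",  # placeholder; some servers ignore or require this
    "messages": [{"role": "user", "content": "Hello from MLX!"}],
    "max_tokens": 128,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])
```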
Related projects:
- A simple web UI for MLX mlx-lm, built with Streamlit. (☆219, updated 2 months ago)
- FastMLX: a high-performance, production-ready API for hosting MLX models. (☆163, updated last week)
- Fast parallel LLM inference for MLX. (☆118, updated 2 months ago)
- MLX-Embeddings: a package for running vision and language embedding models locally on your Mac using MLX. (☆60, updated 3 weeks ago)
- Generate train.jsonl and valid.jsonl files for fine-tuning Mistral and other LLMs. (☆67, updated 7 months ago)
- SiLLM simplifies training and running large language models (LLMs) on Apple Silicon by leveraging the MLX framework. (☆207, updated last week)
- Port of Suno's Bark TTS transformer to Apple's MLX framework. (☆62, updated 7 months ago)
- Scripts to create your own MoE models using MLX. (☆86, updated 6 months ago)
- 🤖 Headless IDE for AI agents. (☆110, updated this week)
- A simple Jupyter notebook for learning MLX text-completion fine-tuning. (☆85, updated this week)
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI. (☆223, updated 4 months ago)
- Run embeddings in MLX. (☆68, updated last month)
- A fast batching API for serving LLMs. (☆172, updated 4 months ago)
- MLX implementations of various transformers, with speedups and training. (☆34, updated 9 months ago)
- Start a server from the MLX library (a minimal launch sketch follows this list). (☆157, updated last month)
- Very basic framework for parameterized large language model (Q)LoRA fine-tuning using mlx, mlx_lm, and OgbujiPT. Architecture for system… (☆32, updated last month)
- Low-rank adapter extraction for fine-tuned transformer models. (☆154, updated 4 months ago)
- Gradio-based tool to run open-source LLMs directly from Hugging Face. (☆84, updated 2 months ago)
- Phi-3.5 for Mac: locally-run vision and language models for Apple Silicon. (☆206, updated last week)
- MLX-VLM: a package for running vision LLMs locally on your Mac using MLX. (☆187, updated this week)
- One-click templates for language model inference. (☆97, updated last week)
- A simple example of using MLX for a RAG application running locally on Apple Silicon. (☆144, updated 7 months ago)
- Client-side toolkit for using large language models, including self-hosted ones. (☆101, updated last month)
- Distributed inference for MLX LLMs. (☆57, updated last month)
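
For the "Start a server from the MLX library" entry above, the sketch below launches a local server via the mlx-lm package's `mlx_lm.server` module. The flags and the model name are assumptions based on mlx-lm's documented server entry point; verify them against that project's README.

```python
# Launch sketch: starts an OpenAI-compatible server via mlx-lm's
# `python -m mlx_lm.server` entry point. The model name and port are
# placeholders/assumptions; adjust to your setup.
import subprocess

subprocess.run([
    "python", "-m", "mlx_lm.server",
    "--model", "mlx-community/Mistral-7B-Instruct-v0.2-4bit",  # placeholder model
    "--port", "8080",
])
```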