ml-explore / mlx-lm
Run LLMs with MLX
☆2,594 · Updated this week
Alternatives and similar repositories for mlx-lm
Users interested in mlx-lm are comparing it to the libraries listed below.
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆1,685 · Updated this week
- LM Studio Apple MLX engine ☆790 · Updated last week
- A text-to-speech (TTS), speech-to-text (STT) and speech-to-speech (STS) library built on Apple's MLX framework, providing efficient speec… ☆2,736 · Updated 2 weeks ago
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆580 · Updated last month
- An MLX port of FLUX based on the Hugging Face Diffusers implementation. ☆1,569 · Updated last week
- Embedding Atlas is a tool that provides interactive visualizations for large embeddings. It allows you to visualize, cross-filter, and se… ☆3,927 · Updated this week
- Artificial Neural Engine Machine Learning Library ☆1,205 · Updated last month
- Big & Small LLMs working together ☆1,181 · Updated this week
- ☆1,743 · Updated this week
- Renderer for the harmony response format to be used with gpt-oss ☆3,879 · Updated last month
- VS Code extension for LLM-assisted code/text completion ☆988 · Updated this week
- Making the community's best AI chat models available to everyone. ☆1,982 · Updated 8 months ago
- Building blocks for rapid development of GenAI applications ☆1,582 · Updated this week
- This repository contains the official implementation of "FastVLM: Efficient Vision Encoding for Vision Language Models" (CVPR 2025). ☆6,746 · Updated 5 months ago
- Open Source Application for Advanced LLM + Diffusion Engineering: interact, train, fine-tune, and evaluate large language models on your … ☆4,417 · Updated this week
- Examples in the MLX framework ☆7,914 · Updated this week
- FastMLX is a high-performance, production-ready API to host MLX models. ☆330 · Updated 6 months ago
- On-device Image Generation for Apple Silicon ☆659 · Updated 6 months ago
- An implementation of Nvidia's Parakeet models for Apple Silicon using MLX. ☆515 · Updated last week
- Native, Apple Silicon–only local LLM server. Similar to Ollama, but built on Apple's MLX for maximum performance on M‑series chips. Swift… ☆1,415 · Updated this week
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… ☆1,426 · Updated this week
- llama.cpp fork with additional SOTA quants and improved performance ☆1,246 · Updated this week
- Run llama and other large language models offline on iOS and macOS using the GGML library. ☆1,888 · Updated 3 weeks ago
- Build, enrich, and transform datasets using AI models with no code ☆1,510 · Updated this week
- Recursive-Open-Meta-Agent v0.1 (Beta). A meta-agent framework to build high-performance multi-agent systems. ☆3,709 · Updated 2 weeks ago
- Supercharge Your LLM with the Fastest KV Cache Layer ☆5,506 · Updated this week
- Collection of Apple-native tools for the Model Context Protocol. ☆2,689 · Updated 2 months ago
- LM Studio Python SDK ☆657 · Updated last month
- Everything about the SmolLM and SmolVLM family of models ☆3,300 · Updated 3 weeks ago
- Lightweight coding agent that runs in your terminal ☆2,113 · Updated 5 months ago