antranapp / awesome-mlx
☆197 · Updated 10 months ago
Alternatives and similar repositories for awesome-mlx
Users interested in awesome-mlx are comparing it to the libraries listed below.
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆342 · Updated 10 months ago
- MLX-Embeddings is the best package for running vision and language embedding models locally on your Mac using MLX. ☆269 · Updated 3 weeks ago
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆178 · Updated 2 years ago
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆284 · Updated 7 months ago
- Chat with MLX is a high-performance macOS application that connects your local documents to a personalized large language model (LLM). ☆176 · Updated last year
- The easiest way to run the fastest MLX-based LLMs locally. ☆310 · Updated last year
- Phi-3.5 for Mac: locally run vision and language models for Apple Silicon. ☆273 · Updated 2 months ago
- A simple UI / web frontend for MLX mlx-lm using Streamlit. ☆260 · Updated 3 months ago
- Large Language Model (LLM) applications and tools running on Apple Silicon in real time with Apple MLX. ☆458 · Updated last year
- Inference and serving for local LLMs using the MLX framework. ☆110 · Updated last year
- MLX image models for Apple Silicon machines. ☆91 · Updated 2 months ago
- Python tools for WhisperKit: model conversion, optimization, and evaluation. ☆236 · Updated 3 months ago
- MLX Model Manager unifies loading and inference with LLMs and VLMs. ☆103 · Updated last year
- ☆77 · Updated last year
- Start a server from the MLX library. ☆196 · Updated last year
- Run embeddings in MLX. ☆97 · Updated last year
- Blazing-fast Whisper Turbo for ASR (speech-to-text) tasks. ☆218 · Updated 2 months ago
- 📋 NotebookMLX - an open-source version of NotebookLM (ported NotebookLlama). ☆335 · Updated 11 months ago
- Benchmark of Apple MLX operations on all Apple Silicon chips (GPU, CPU) plus MPS and CUDA. ☆214 · Updated last month
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆656 · Updated last month
- An LLM-agnostic desktop and mobile client. ☆315 · Updated 4 months ago
- CLI to demonstrate running a large language model (LLM) on the Apple Neural Engine. ☆121 · Updated last year
- ☆307 · Updated 9 months ago
- Train Large Language Models on MLX. ☆245 · Updated last week
- MLX Transformers is a library that provides model implementations in MLX. It uses a similar model interface as HuggingFace Transformers an… ☆72 · Updated last year
- GenAI & agent toolkit for Apple Silicon Macs, implementing JSON schema-steered structured output (3SO) and tool calling in Python. For mor… ☆132 · Updated last month
- ☆129 · Updated 7 months ago
- CLI tool for text-to-image generation using the FLUX.1 model. ☆67 · Updated 7 months ago
- Your gateway to both Ollama & Apple MLX models. ☆150 · Updated 11 months ago
- Distributed inference for MLX LLMs
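Several of the servers listed above (e.g. FastMLX, MLX Omni Server, the mlx-lm server) expose OpenAI-compatible HTTP endpoints, so one client sketch covers all of them. The snippet below is a minimal, stdlib-only illustration, assuming a server is already running locally; the base URL, port, and model name are placeholders, not values prescribed by any of these projects:

```python
import json
import urllib.request


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }


def chat(base_url: str, model: str, prompt: str) -> str:
    """POST the payload to an OpenAI-compatible /v1/chat/completions endpoint
    and return the assistant's reply text."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Example (requires a running server; URL and model name are hypothetical):
# reply = chat("http://localhost:8000", "mlx-community/some-model", "Hello!")
```

Because the request/response shape is the standard OpenAI one, the same client works unchanged against whichever of these MLX servers you choose to run.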