riccardomusmeci / mlx-llm
Large Language Model (LLM) applications and tools running in real time on Apple Silicon with Apple MLX.
☆328 · Updated 2 months ago
Related projects
Alternatives and complementary repositories for mlx-llm
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆212 · Updated last week
- MLX-VLM is a package for running Vision LLMs locally on your Mac using MLX. ☆471 · Updated this week
- A simple web UI / frontend for MLX mlx-lm using Streamlit. ☆225 · Updated last month
- SiLLM simplifies training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆223 · Updated last week
- Start a server from the MLX library. ☆159 · Updated 3 months ago
- 👾🍎 Apple MLX engine for LM Studio ☆171 · Updated this week
- Phi-3.5 for Mac: locally run Vision and Language Models for Apple Silicon ☆235 · Updated 2 months ago
- Fast parallel LLM inference for MLX ☆146 · Updated 4 months ago
- On-device Inference of Diffusion Models for Apple Silicon ☆494 · Updated last week
- Run embeddings in MLX ☆72 · Updated last month
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆146 · Updated 9 months ago
- An extremely fast implementation of Whisper optimized for Apple Silicon using MLX. ☆577 · Updated 6 months ago
- MLX-Embeddings is a package for running Vision and Language Embedding models locally on your Mac using MLX. ☆75 · Updated 3 weeks ago
- An MLX project to train a base model on your WhatsApp chats using (Q)LoRA fine-tuning ☆159 · Updated 9 months ago
- Benchmark of Apple MLX operations on all Apple Silicon chips (GPU, CPU), plus MPS and CUDA. ☆123 · Updated 2 months ago
- The easiest way to run the fastest MLX-based LLMs locally ☆208 · Updated last week
- Inference and serving for local LLMs using the MLX framework ☆89 · Updated 7 months ago
- 1.58-bit LLM on Apple Silicon using MLX ☆134 · Updated 5 months ago
- ☆148 · Updated 3 months ago
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI ☆222 · Updated 6 months ago
- ☆96 · Updated 2 months ago
- Chat with MLX is a high-performance macOS application that connects your local documents to a personalized large language model (LLM). ☆162 · Updated 8 months ago
- ☆275 · Updated last month
- WIP: lets you create DSPy pipelines using ComfyUI ☆179 · Updated 3 months ago
- Efficient framework-agnostic data loading ☆362 · Updated 2 months ago
- Blazing-fast Whisper Turbo for ASR (speech-to-text) tasks ☆143 · Updated 2 weeks ago
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆273 · Updated last month
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆219 · Updated last week
- Port of Andrej Karpathy's nanoGPT to the Apple MLX framework. ☆97 · Updated 8 months ago
- Distributed inference for MLX LLMs ☆68 · Updated 3 months ago