argmaxinc / DiffusionKit
On-device Inference of Diffusion Models for Apple Silicon
☆434 · Updated last week
Related projects:
- An MLX port of FLUX based on the Hugging Face Diffusers implementation. ☆652 · Updated this week
- SiLLM simplifies training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆207 · Updated last week
- Phi-3.5 for Mac: locally run vision and language models for Apple Silicon. ☆206 · Updated last week
- A simple UI / web frontend for MLX mlx-lm using Streamlit. ☆219 · Updated 2 months ago
- ☆101 · Updated this week
- MLX-VLM is a package for running vision LLMs locally on your Mac using MLX. ☆187 · Updated this week
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆163 · Updated last week
- 🤖✨ ChatMLX is a modern, open-source, high-performance chat application for macOS based on large language models. ☆195 · Updated last week
- An extremely fast implementation of Whisper optimized for Apple Silicon using MLX. ☆519 · Updated 4 months ago
- Large Language Model (LLM) applications and tools running in real time on Apple Silicon with Apple MLX. ☆310 · Updated 3 weeks ago
- "Deep Dive into AI with MLX and PyTorch" is an educational initiative designed to help anyone interested in AI, specifically in machine l… ☆345 · Updated 5 months ago
- Start a server from the MLX library. ☆157 · Updated last month
- Examples using MLX Swift. ☆627 · Updated last week
- A multi-platform SwiftUI frontend for running local LLMs with Apple's MLX framework. ☆342 · Updated 3 weeks ago
- ☆91 · Updated last month
- The easiest way to run the fastest MLX-based LLMs locally. ☆202 · Updated 2 months ago
- LLM, multimodal, and agent tools for ComfyUI. ☆312 · Updated last month
- Swift Core ML examples. ☆143 · Updated this week
- Chat with MLX is a high-performance macOS application that connects your local documents to a personalized large language model (LLM). ☆158 · Updated 6 months ago
- Swift library for working with Llama and other large language models. ☆205 · Updated last week
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆144 · Updated 7 months ago
- Python tools for WhisperKit: model conversion, optimization, and evaluation. ☆151 · Updated 5 months ago
- Mac-compatible Ollama Voice. ☆401 · Updated 5 months ago
- FlashAttention (Metal port). ☆358 · Updated 3 weeks ago
- Fast parallel LLM inference for MLX. ☆118 · Updated 2 months ago
- llama.cpp with the BakLLaVA model describes what it sees. ☆378 · Updated 10 months ago
- Benchmarks of Apple MLX operations on all Apple Silicon chips (GPU, CPU), plus MPS and CUDA. ☆112 · Updated 2 weeks ago
- An application for running LLMs locally on your device with your documents, facilitating detailed citations in generated responses. ☆459 · Updated this week
- Generate accurate transcripts using Apple's MLX framework. ☆154 · Updated last week
- Whisper with Medusa heads. ☆774 · Updated last week