qnguyen3 / chat-with-mlx
An all-in-one LLMs Chat UI for Apple Silicon Mac using MLX Framework.
☆1,520 · Updated 5 months ago
Alternatives and similar repositories for chat-with-mlx:
Users interested in chat-with-mlx are comparing it to the repositories listed below.
- Examples in the MLX framework ☆6,921 · Updated this week
- An MLX port of FLUX based on the Hugging Face Diffusers implementation. ☆1,195 · Updated this week
- Making the community's best AI chat models available to everyone. ☆1,917 · Updated last week
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆828 · Updated last week
- 🤖✨ ChatMLX is a modern, open-source, high-performance chat application for macOS based on large language models. ☆687 · Updated 3 months ago
- On-device diffusion models for Apple Silicon ☆577 · Updated 2 months ago
- An extremely fast implementation of Whisper optimized for Apple Silicon using MLX. ☆643 · Updated 9 months ago
- 👾🍎 Apple MLX engine for LM Studio ☆381 · Updated this week
- Large Language Model (LLM) applications and tools running in real time on Apple Silicon with Apple MLX. ☆410 · Updated 2 weeks ago
- Examples using MLX Swift ☆1,530 · Updated this week
- Mac app for Ollama ☆1,614 · Updated last week
- Enchanted is an iOS and macOS app for chatting with private, self-hosted language models such as Llama2, Mistral, or Vicuna using Ollama. ☆4,825 · Updated 2 weeks ago
- Llama and other large language models running offline on iOS and macOS using the GGML library. ☆1,580 · Updated 2 weeks ago
- SiLLM simplifies training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆252 · Updated this week
- Finetune ALL LLMs with ALL Adapters on ALL Platforms! ☆312 · Updated 3 weeks ago
- Chat with MLX is a high-performance macOS application that connects your local documents to a personalized large language model (LLM). ☆169 · Updated 11 months ago
- The easiest way to run the fastest MLX-based LLMs locally ☆251 · Updated 3 months ago
- Generate accurate transcripts using Apple's MLX framework ☆368 · Updated 2 months ago
- Use Ollama to talk to local LLMs in Apple Notes ☆639 · Updated 4 months ago
- Implementation of F5-TTS in MLX ☆470 · Updated last week
- A simple web UI / frontend for MLX's mlx-lm using Streamlit. ☆241 · Updated 2 weeks ago
- AirLLM: 70B inference with a single 4GB GPU ☆5,666 · Updated 2 months ago
- Benchmark of Apple MLX operations on all Apple Silicon chips (GPU, CPU), plus MPS and CUDA. ☆152 · Updated 3 months ago
- A self-improving embodied conversational agent seamlessly integrated into the operating system to automate daily tasks. ☆1,597 · Updated 5 months ago
- Distributed LLM and Stable Diffusion inference for mobile, desktop, and server. ☆2,763 · Updated 3 months ago
- TTS with Kokoro and ONNX Runtime ☆1,556 · Updated last week
- Mac-compatible Ollama Voice ☆460 · Updated 10 months ago
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆260 · Updated 2 months ago
- [ICLR 2025] LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs ☆1,594 · Updated 3 months ago