qnguyen3 / chat-with-mlx
An all-in-one LLM chat UI for Apple Silicon Macs using the MLX framework.
★ 1,562 · Updated 8 months ago
Alternatives and similar repositories for chat-with-mlx
Users interested in chat-with-mlx are comparing it to the repositories listed below.
- ChatMLX is a modern, open-source, high-performance chat application for macOS based on large language models. ★ 788 · Updated 2 months ago
- Use Ollama to talk to local LLMs in Apple Notes ★ 681 · Updated 8 months ago
- Examples using MLX Swift ★ 1,807 · Updated 2 weeks ago
- llama and other large language models on iOS and macOS, offline, using the GGML library. ★ 1,777 · Updated 2 months ago
- Making the community's best AI chat models available to everyone. ★ 1,958 · Updated 3 months ago
- Examples in the MLX framework ★ 7,444 · Updated last month
- Apple MLX engine for LM Studio ★ 564 · Updated last week
- An extremely fast implementation of whisper optimized for Apple Silicon using MLX. ★ 706 · Updated last year
- A simple UI / Web / Frontend for MLX mlx-lm using Streamlit. ★ 253 · Updated 4 months ago
- Large Language Model (LLM) applications and tools running on Apple Silicon in real time with Apple MLX. ★ 443 · Updated 4 months ago
- Mac app for Ollama ★ 1,840 · Updated 2 months ago
- An MLX port of FLUX based on the Hugging Face Diffusers implementation. ★ 1,369 · Updated this week
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ★ 267 · Updated last week
- mactop - Apple Silicon Monitor Top ★ 1,928 · Updated 4 months ago
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ★ 1,293 · Updated this week
- Llama-3 agents that can browse the web by following instructions and talking to you ★ 1,404 · Updated 5 months ago
- Enchanted is an iOS and macOS app for chatting with private, self-hosted language models such as Llama 2, Mistral, or Vicuna using Ollama. ★ 5,365 · Updated 2 months ago
- HTML to Markdown converter and crawler. ★ 539 · Updated last year
- ★ 173 · Updated 2 months ago
- Chat with your favourite LLaMA models in a native macOS app ★ 1,501 · Updated last year
- FastMLX is a high-performance, production-ready API for hosting MLX models. ★ 305 · Updated 2 months ago
- Run LLMs with MLX ★ 877 · Updated this week
- A native macOS app for chatting with local LLMs ★ 386 · Updated 7 months ago
- Finetune ALL LLMs with ALL Adapters on ALL Platforms! ★ 320 · Updated 2 weeks ago
- llama.cpp-based AI chat app for macOS ★ 487 · Updated 6 months ago
- The easiest way to run the fastest MLX-based LLMs locally ★ 282 · Updated 7 months ago
- Mac-compatible Ollama Voice ★ 483 · Updated last year
- Build a Perplexity-inspired answer engine using Next.js, Groq, Llama-3, LangChain, OpenAI, Upstash, Brave & Serper ★ 4,896 · Updated 8 months ago
- Chat with MLX is a high-performance macOS application that connects your local documents to a personalized large language model (LLM). ★ 174 · Updated last year
- ★ 263 · Updated last year