qnguyen3 / chat-with-mlx
An all-in-one LLM chat UI for Apple Silicon Macs using the MLX framework.
★ 1,554 · Updated 7 months ago
Alternatives and similar repositories for chat-with-mlx:
Users interested in chat-with-mlx are comparing it to the libraries listed below:
- ChatMLX is a modern, open-source, high-performance chat application for macOS based on large language models. ★ 772 · Updated last month
- Llama and other large language models on iOS and macOS, offline, using the GGML library. ★ 1,734 · Updated last month
- On-device Image Generation for Apple Silicon ★ 613 · Updated 2 weeks ago
- Examples in the MLX framework ★ 7,324 · Updated last month
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ★ 1,200 · Updated this week
- Examples using MLX Swift ★ 1,688 · Updated this week
- On-device Speech Recognition for Apple Silicon ★ 4,537 · Updated this week
- Use Ollama to talk to local LLMs in Apple Notes ★ 670 · Updated 6 months ago
- Apple MLX engine for LM Studio ★ 524 · Updated this week
- Making the community's best AI chat models available to everyone. ★ 1,948 · Updated 2 months ago
- An extremely fast implementation of Whisper optimized for Apple Silicon using MLX. ★ 690 · Updated 11 months ago
- Generate accurate transcripts using Apple's MLX framework ★ 393 · Updated last month
- Mac app for Ollama ★ 1,789 · Updated last month
- FastMLX is a high-performance, production-ready API to host MLX models. ★ 293 · Updated last month
- Run LLMs with MLX ★ 461 · Updated this week
- A simple UI / Web / Frontend for MLX mlx-lm using Streamlit. ★ 250 · Updated 2 months ago
- An MLX port of FLUX based on the Hugging Face Diffusers implementation. ★ 1,335 · Updated this week
- Chat with MLX is a high-performance macOS application that connects your local documents to a personalized large language model (LLM). ★ 173 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ★ 981 · Updated 9 months ago
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ★ 262 · Updated 2 weeks ago
- Large language model (LLM) applications and tools running on Apple Silicon in real time with Apple MLX. ★ 438 · Updated 2 months ago
- Build a Perplexity-Inspired Answer Engine Using Next.js, Groq, Llama-3, Langchain, OpenAI, Upstash, Brave & Serper ★ 4,886 · Updated 6 months ago
- Benchmark of Apple MLX operations on all Apple Silicon chips (GPU, CPU) + MPS and CUDA. ★ 174 · Updated 2 weeks ago
- Enchanted is an iOS and macOS app for chatting with private, self-hosted language models such as Llama 2, Mistral, or Vicuna using Ollama. ★ 5,217 · Updated last month
- Finetune ALL LLMs with ALL Adapters on ALL Platforms! ★ 317 · Updated 3 weeks ago
- Implementation of F5-TTS in MLX ★ 520 · Updated last month
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ★ 335 · Updated 2 weeks ago
- Cohere Toolkit is a collection of prebuilt components enabling users to quickly build and deploy RAG applications. ★ 3,033 · Updated last week
- The easiest way to run the fastest MLX-based LLMs locally ★ 278 · Updated 5 months ago
- Reference implementation of the Transformer architecture optimized for the Apple Neural Engine (ANE) ★ 2,614 · Updated 2 years ago