qnguyen3 / chat-with-mlx
An all-in-one LLM chat UI for Apple Silicon Macs using the MLX framework.
☆1,568 · Updated 9 months ago
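Chat apps in this space typically drive MLX text generation through the mlx-lm Python package. The sketch below shows that flow in isolation, as a rough assumption about how such a UI wraps the library rather than this repo's actual code; the quantized model ID is an example from the mlx-community Hugging Face organization and is not shipped with the project.

```python
# Minimal sketch of local chat inference with mlx-lm on Apple Silicon.
# Assumptions: mlx-lm is installed (pip install mlx-lm) and the example model
# below is available on the Hugging Face Hub; swap in any MLX-converted model.
from mlx_lm import load, generate

# Download (on first run) and load a 4-bit quantized model plus its tokenizer.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

# Build a chat-formatted prompt using the model's chat template.
messages = [{"role": "user", "content": "What is the MLX framework?"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate a reply; verbose=True also prints generation statistics.
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(response)
```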
Alternatives and similar repositories for chat-with-mlx
Users interested in chat-with-mlx are comparing it to the repositories listed below.
- Examples in the MLX framework ☆7,580 · Updated 2 weeks ago
- Run llama and other large language models offline on iOS and macOS using the GGML library. ☆1,797 · Updated 3 months ago
- 🤖✨ ChatMLX is a modern, open-source, high-performance chat application for macOS based on large language models. ☆794 · Updated 3 months ago
- An MLX port of FLUX based on the Hugging Face Diffusers implementation. ☆1,404 · Updated last week
- A simple web UI / frontend for MLX mlx-lm using Streamlit. ☆256 · Updated 2 weeks ago
- Examples using MLX Swift ☆1,900 · Updated this week
- On-device Image Generation for Apple Silicon ☆626 · Updated 2 months ago
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆1,381 · Updated this week
- The easiest way to run the fastest MLX-based LLMs locally ☆287 · Updated 7 months ago
- Making the community's best AI chat models available to everyone. ☆1,968 · Updated 4 months ago
- Large Language Model (LLM) applications and tools running on Apple Silicon in real time with Apple MLX. ☆446 · Updated 5 months ago
- Chat with MLX is a high-performance macOS application that connects your local documents to a personalized large language model (LLM). ☆175 · Updated last year
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆308 · Updated 3 months ago
- Mac app for Ollama ☆1,855 · Updated 3 months ago
- Video Search and Streaming Agent 🕵️‍♂️ ☆470 · Updated last year
- MLX: An array framework for Apple silicon ☆21,224 · Updated this week
- Finetune ALL LLMs with ALL Adapters on ALL Platforms! ☆319 · Updated last week
- Apple MLX engine for LM Studio ☆630 · Updated this week
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆273 · Updated last week
- ☆285 · Updated last year
- Run LLMs with MLX ☆1,179 · Updated this week
- Multiple NVIDIA GPUs or Apple Silicon for Large Language Model Inference? ☆1,670 · Updated last year
- Swift library to work with llama and other large language models. ☆263 · Updated 5 months ago
- On-device Speech Recognition for Apple Silicon ☆4,752 · Updated this week
- Use Ollama to talk to local LLMs in Apple Notes ☆684 · Updated 8 months ago
- Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon ☆270 · Updated 9 months ago
- llama.cpp based AI chat app for macOS ☆489 · Updated 7 months ago
- WikiChat is an improved RAG system that reduces large language model hallucinations by retrieving data from a corpus. ☆1,465 · Updated 2 months ago
- An extremely fast implementation of whisper optimized for Apple Silicon using MLX. ☆732 · Updated last year
- Yes, it's another chat over documents implementation... but this one is entirely local! ☆1,774 · Updated 3 months ago