Hugging Face chat-ui integration with mlx-lm server
☆62 · Feb 13, 2024 · Updated 2 years ago
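For context on what this integration involves: mlx-lm can serve models over an OpenAI-compatible HTTP endpoint (started with `python -m mlx_lm.server`), and chat-ui can be pointed at that endpoint as a custom backend. A minimal client-side sketch, assuming the server is running locally on port 8080 (the default) and that the model name shown is one you have pulled; both are assumptions, not part of this page:

```python
import json
from urllib import request

def build_chat_request(prompt, model="mlx-community/Mistral-7B-Instruct-v0.2-4bit",
                       max_tokens=256, temperature=0.7):
    """Build an OpenAI-style chat completion payload for the mlx-lm server."""
    return {
        "model": model,  # hypothetical model id; use whatever you serve locally
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def query_server(prompt, base_url="http://localhost:8080"):
    """POST the payload to the server's OpenAI-compatible chat endpoint."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses put the generated text here
    return body["choices"][0]["message"]["content"]
```

chat-ui reads its endpoint configuration from `.env.local`, so wiring it up amounts to pointing an OpenAI-type endpoint entry at the same base URL.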
Alternatives and similar repositories for mlx-chat-ui
Users interested in mlx-chat-ui are comparing it to the libraries listed below.
- A simple script to enhance text editing across your Mac, leveraging the power of MLX. Designed for seamless integration, it offers real-t… ☆110 · Mar 4, 2024 · Updated 2 years ago
- A very basic framework for composable, parameterized large language model (Q)LoRA / (Q)DoRA fine-tuning using mlx, mlx_lm, and OgbujiPT. ☆42 · Jun 20, 2025 · Updated 10 months ago
- ☆15 · May 17, 2024 · Updated last year
- A simple UI / web frontend for MLX mlx-lm using Streamlit. ☆263 · Oct 25, 2025 · Updated 6 months ago
- A Gradio chat interface for FastMLX. ☆12 · Sep 22, 2024 · Updated last year
- A chatbot UI for RAG, multimodal input, and text completion (supports Transformers, llama.cpp, MLX, vLLM). ☆20 · Apr 18, 2024 · Updated 2 years ago
- Scripts to create your own MoE models using MLX. ☆89 · Feb 26, 2024 · Updated 2 years ago
- ☆40 · Oct 15, 2023 · Updated 2 years ago
- ☆38 · Mar 12, 2024 · Updated 2 years ago
- Start a server from the MLX library. ☆199 · Jul 26, 2024 · Updated last year
- A simple implementation of a Transformer in Apple's new MLX framework. ☆19 · Nov 18, 2024 · Updated last year
- Train large language models on MLX. ☆363 · Apr 23, 2026 · Updated last week
- AgentParse is a high-performance parsing library designed to map various structured data formats (such as Pydantic models, JSON, YAML, an… ☆18 · Oct 13, 2025 · Updated 6 months ago
- For running inference on and serving local LLMs using the MLX framework. ☆114 · Mar 24, 2024 · Updated 2 years ago
- RoBERTa question answering using MLX. ☆24 · Feb 22, 2026 · Updated 2 months ago
- A tiny server to run local inference on MLX models in the style of the OpenAI API. ☆13 · Jan 31, 2024 · Updated 2 years ago
- Apps that run on modal.com. ☆13 · Sep 14, 2025 · Updated 7 months ago
- Multi-threading, concurrency, asynchrony, and various execution methods implemented in a Rust backend for bleeding-edge performance. ☆20 · Nov 11, 2024 · Updated last year
- ☆18 · Dec 18, 2023 · Updated 2 years ago
- A minimal, clean-code implementation of RAG with MLX using GGUF model weights. ☆53 · Apr 27, 2024 · Updated 2 years ago
- Clean RL implementation using MLX. ☆34 · Mar 8, 2024 · Updated 2 years ago
- ☆20 · Oct 25, 2025 · Updated 6 months ago
- Instant, accurate native macOS transcription. ☆54 · Jul 26, 2025 · Updated 9 months ago
- SiLLM simplifies training and running large language models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆286 · Jun 16, 2025 · Updated 10 months ago
- MLX image models. ☆24 · Mar 14, 2024 · Updated 2 years ago
- An MLX Swift implementation of Andrej Karpathy's "Let's build GPT" video. ☆64 · Apr 14, 2024 · Updated 2 years ago
- A FastAPI-based LLM server that loads multiple models (MLX or llama.cpp) simultaneously using multiprocessing. ☆17 · Apr 8, 2026 · Updated 3 weeks ago
- Open TTS models built for streaming on the edge. ☆45 · Mar 16, 2025 · Updated last year
- Run embeddings in MLX. ☆98 · Sep 27, 2024 · Updated last year
- A Next.js chatbot app demonstrating seamless integration with window.ai. ☆15 · Jun 25, 2023 · Updated 2 years ago
- A grammar checker with a keyboard shortcut for Ollama and Apple MLX, using Automator on macOS. ☆81 · Feb 5, 2024 · Updated 2 years ago
- Test your local LLMs on the AIME problems. ☆35 · Jun 7, 2025 · Updated 10 months ago
- Generate train.jsonl and valid.jsonl files for fine-tuning Mistral and other LLMs. ☆97 · Feb 5, 2024 · Updated 2 years ago
- ☆15 · Sep 8, 2023 · Updated 2 years ago
- ☆26 · Dec 13, 2024 · Updated last year
- An all-in-one LLM chat UI for Apple Silicon Macs using the MLX framework. ☆1,595 · Sep 6, 2024 · Updated last year
- An MLX implementation of GCN, with benchmarks on MPS, CUDA, and CPU (M1 Pro, M2 Ultra, M3 Max). ☆25 · Dec 16, 2023 · Updated 2 years ago
- Code and data repo for the CoNLL paper "Future Lens: Anticipating Subsequent Tokens from a Single Hidden State". ☆21 · Oct 24, 2025 · Updated 6 months ago
- Distributed inference for MLX LLMs. ☆101 · Aug 1, 2024 · Updated last year
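Several of the fine-tuning tools above produce or consume train.jsonl / valid.jsonl splits. A minimal sketch of generating such files, assuming the one-JSON-object-per-line `{"text": ...}` format commonly used by mlx_lm's LoRA examples (the format and split ratio here are assumptions, not taken from any specific repository on this page):

```python
import json
import random
from pathlib import Path

def write_jsonl_splits(samples, out_dir=".", valid_fraction=0.1, seed=0):
    """Shuffle text samples and write train.jsonl / valid.jsonl splits.

    Each line in the output files is a JSON object of the assumed
    form {"text": "..."}. Returns the split assignment for inspection.
    """
    rng = random.Random(seed)  # fixed seed so splits are reproducible
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_valid = max(1, int(len(shuffled) * valid_fraction))
    splits = {"valid.jsonl": shuffled[:n_valid], "train.jsonl": shuffled[n_valid:]}
    for name, rows in splits.items():
        path = Path(out_dir) / name
        with path.open("w", encoding="utf-8") as f:
            for text in rows:
                f.write(json.dumps({"text": text}) + "\n")
    return splits
```

Fixing the shuffle seed keeps the train/valid split stable across runs, which matters when comparing fine-tuning runs against the same held-out data.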