nath1295 / MLX-Textgen
A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching, using MLX.
☆80 · Updated 5 months ago
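Because MLX-Textgen exposes an OpenAI-compatible API, a standard OpenAI client can talk to it. The sketch below uses the `openai` Python client against a local server; the host, port, and model name are assumptions for illustration, not values documented here.

```python
# Minimal sketch of querying an OpenAI-compatible endpoint such as the one
# MLX-Textgen serves. Base URL, port, and model identifier are assumptions;
# check the server's own docs/CLI output for the actual values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:5001/v1",  # assumed address of the local server
    api_key="not-needed",                 # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="my-local-mlx-model",           # hypothetical model name registered with the server
    messages=[{"role": "user", "content": "Summarize what prompt caching does."}],
)
print(response.choices[0].message.content)
```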
Alternatives and similar repositories for MLX-Textgen
Users interested in MLX-Textgen are comparing it to the libraries listed below.
- ☆24 · Updated 3 months ago
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆71 · Updated 8 months ago
- ☆38 · Updated last year
- For inferring and serving local LLMs using the MLX framework ☆103 · Updated last year
- Very basic framework for composable parameterized large language model (Q)LoRA / (Q)Dora fine-tuning using mlx, mlx_lm, and OgbujiPT. ☆40 · Updated 2 months ago
- Distributed inference for MLX LLMs ☆91 · Updated 9 months ago
- Guaranteed Structured Output from any Language Model via Hierarchical State Machines ☆128 · Updated 2 weeks ago
- klmbr - a prompt pre-processing technique to break through the barrier of entropy while generating text with LLMs ☆71 · Updated 7 months ago
- A little file for doing LLM-assisted prompt expansion and image generation using Flux.schnell - complete with prompt history, prompt queu… ☆26 · Updated 9 months ago
- MLX-Embeddings is the best package for running Vision and Language Embedding models locally on your Mac using MLX. ☆150 · Updated 3 weeks ago
- A command-line utility to manage MLX models between your Hugging Face cache and LM Studio. ☆39 · Updated 2 months ago
- ☆82 · Updated 3 months ago
- Glyphs, acting as collaboratively defined symbols linking related concepts, add a layer of multidimensional semantic richness to user-AI … ☆49 · Updated 3 months ago
- ☆114 · Updated 4 months ago
- ☆72 · Updated last week
- GenAI & agent toolkit for Apple Silicon Mac, implementing JSON schema-steered structured output (3SO) and tool-calling in Python. For mor… ☆123 · Updated this week
- Run multiple resource-heavy Large Models (LM) on the same machine with limited amount of VRAM/other resources by exposing them on differe… ☆61 · Updated this week
- Grammar checker with a keyboard shortcut for Ollama and Apple MLX with Automator on macOS. ☆80 · Updated last year
- ☆66 · Updated 11 months ago
- ☆130 · Updated 2 weeks ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆75 · Updated 2 weeks ago
- run ollama & gguf easily with a single command ☆50 · Updated last year
- Local LLM inference & management server with built-in OpenAI API ☆31 · Updated last year
- tiny_fnc_engine is a minimal Python library that provides a flexible engine for calling functions extracted from an LLM. ☆38 · Updated 8 months ago
- Self-hosted LLM chatbot arena, with yourself as the only judge ☆40 · Updated last year
- Something similar to Apple Intelligence? ☆60 · Updated 10 months ago
- Minimal, clean code implementation of RAG with mlx using gguf model weights ☆50 · Updated last year
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com. ☆115 · Updated 11 months ago
- Experimental LLM Inference UX to aid in creative writing ☆116 · Updated 5 months ago
- Chat WebUI is an easy-to-use user interface for interacting with AI, and it comes with multiple useful built-in tools. ☆29 · Updated 2 months ago