nath1295 / MLX-Textgen
A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching, using MLX.
☆89 · Updated 2 weeks ago
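Because the package exposes an OpenAI-compatible endpoint, a standard OpenAI client can talk to it. The snippet below is a minimal sketch only, assuming a locally running server; the base URL, port, and model name are placeholders rather than documented defaults of MLX-Textgen.

```python
# Minimal sketch: query a locally hosted OpenAI-compatible server.
# The URL, port, and model name below are assumptions for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local endpoint
    api_key="not-needed",                 # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="my-mlx-model",  # placeholder model identifier
    messages=[{"role": "user", "content": "Hello from Apple Silicon!"}],
)
print(response.choices[0].message.content)
```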
Alternatives and similar repositories for MLX-Textgen
Users interested in MLX-Textgen are comparing it to the libraries listed below.
- Distributed inference for MLX LLMs ☆93 · Updated 11 months ago
- Very basic framework for composable, parameterized large language model (Q)LoRA / (Q)DoRA fine-tuning using mlx, mlx_lm, and OgbujiPT. ☆42 · Updated 3 weeks ago
- For inferring and serving local LLMs using the MLX framework ☆104 · Updated last year
- GenAI & agent toolkit for Apple Silicon Mac, implementing JSON schema-steered structured output (3SO) and tool-calling in Python. For mor… ☆128 · Updated last month
- ☆24 · Updated 5 months ago
- Experimental LLM inference UX to aid in creative writing ☆114 · Updated 7 months ago
- Guaranteed structured output from any language model via hierarchical state machines ☆140 · Updated last month
- ☆38 · Updated last year
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆71 · Updated 10 months ago
- Grammar checker with a keyboard shortcut for Ollama and Apple MLX, with Automator on macOS. ☆82 · Updated last year
- ☆131 · Updated 2 months ago
- Glyphs, acting as collaboratively defined symbols linking related concepts, add a layer of multidimensional semantic richness to user-AI … ☆49 · Updated 5 months ago
- A little file for doing LLM-assisted prompt expansion and image generation using Flux.schnell, complete with prompt history, prompt queu… ☆26 · Updated 11 months ago
- ☆87 · Updated 5 months ago
- The heart of The Pulsar App: fast, secure, and shared inference with a modern UI ☆57 · Updated 7 months ago
- Train large language models on MLX. ☆126 · Updated this week
- A simple Jupyter notebook for learning MLX text-completion fine-tuning! ☆120 · Updated 8 months ago
- Fast parallel LLM inference for MLX ☆198 · Updated last year
- Run multiple resource-heavy large models (LMs) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆67 · Updated 2 weeks ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆81 · Updated 2 months ago
- klmbr: a prompt pre-processing technique to break through the barrier of entropy while generating text with LLMs ☆78 · Updated 9 months ago
- ☆101 · Updated last month
- This project is a reverse-engineered version of Figma's tone changer. It uses Groq's Llama-3-8b for high-speed inference and to adjust th… ☆89 · Updated 11 months ago
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com. ☆116 · Updated last year
- Implementation of nougat that focuses on processing PDFs locally. ☆81 · Updated 6 months ago
- Conduct in-depth research with AI-driven insights: DeepDive is a command-line tool that leverages web searches and AI models to generate… ☆42 · Updated 10 months ago
- A simple experiment on letting two local LLMs have a conversation about anything! ☆110 · Updated last year
- MLX-GUI: MLX Inference Server ☆69 · Updated this week
- This small API downloads and exposes access to NeuML's txtai-wikipedia and full Wikipedia datasets, taking in a query and returning full … ☆97 · Updated 3 months ago