perk11 / large-model-proxy
Run multiple resource-heavy Large Models (LMs) on the same machine with a limited amount of VRAM or other resources by exposing them on different ports and loading/unloading them on demand.
☆82 · Updated this week
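For orientation, here is a minimal conceptual sketch of the on-demand pattern the description above refers to: a TCP listener that starts a backend model server on the first incoming connection and then forwards bytes in both directions. This is not large-model-proxy's actual code or configuration; the ports, launch command, and readiness wait below are hypothetical.

```python
# Conceptual sketch of an on-demand model proxy (hypothetical values throughout).
import socket
import subprocess
import threading
import time

LISTEN_PORT = 8080                                             # port exposed to clients (hypothetical)
BACKEND_PORT = 9090                                            # port the model server listens on (hypothetical)
BACKEND_CMD = ["./run-model-server", "--port", str(BACKEND_PORT)]  # hypothetical launch command

_backend = None

def ensure_backend():
    """Start the backend model server if it is not already running."""
    global _backend
    if _backend is None or _backend.poll() is not None:
        _backend = subprocess.Popen(BACKEND_CMD)
        time.sleep(5)  # naive readiness wait; a real proxy would poll the port or a health endpoint

def pipe(src, dst):
    """Copy bytes from one socket to the other until EOF, then close the destination."""
    try:
        while data := src.recv(65536):
            dst.sendall(data)
    finally:
        dst.close()

def handle(client):
    """Lazily start the backend, then relay traffic in both directions."""
    ensure_backend()
    backend = socket.create_connection(("127.0.0.1", BACKEND_PORT))
    threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
    pipe(backend, client)

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", LISTEN_PORT))
    server.listen()
    while True:
        client, _ = server.accept()
        threading.Thread(target=handle, args=(client,), daemon=True).start()

if __name__ == "__main__":
    main()
```

A real proxy of this kind would also track idle time and stop backends to free VRAM for other models; that bookkeeping is omitted from this sketch.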
Alternatives and similar repositories for large-model-proxy
Users interested in large-model-proxy are comparing it to the libraries listed below.
- Chat WebUI is an easy-to-use user interface for interacting with AI, and it comes with multiple useful built-in tools such as web search … ☆44 · Updated 2 weeks ago
- Easily view and modify JSON datasets for large language models ☆82 · Updated 4 months ago
- A frontend for creative writing with LLMs ☆134 · Updated last year
- ☆209 · Updated last week
- ☆20 · Updated 11 months ago
- A real-time shared memory layer for multi-agent LLM systems. ☆47 · Updated 2 months ago
- Open source LLM UI, compatible with all local LLM providers. ☆174 · Updated 11 months ago
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining. ☆42 · Updated last week
- Guaranteed Structured Output from any Language Model via Hierarchical State Machines ☆146 · Updated 3 months ago
- Glyphs, acting as collaboratively defined symbols linking related concepts, add a layer of multidimensional semantic richness to user-AI … ☆52 · Updated 7 months ago
- Experimental LLM Inference UX to aid in creative writing ☆121 · Updated 9 months ago
- CaSIL is an advanced natural language processing system that implements a sophisticated four-layer semantic analysis architecture. It pro… ☆66 · Updated 10 months ago
- klmbr - a prompt pre-processing technique to break through the barrier of entropy while generating text with LLMs ☆80 · Updated 11 months ago
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆71 · Updated last year
- "a towel is about the most massively useful thing an interstellar AI hitchhiker can have" ☆48 · Updated 11 months ago
- ☆30 · Updated 11 months ago
- InferX is an Inference Function-as-a-Service platform ☆133 · Updated this week
- Smart proxy for LLM APIs that enables model-specific parameter control, automatic mode switching (like Qwen3's /think and /no_think), and… ☆50 · Updated 3 months ago
- ☆24 · Updated 7 months ago
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching using MLX. ☆95 · Updated 2 months ago
- Adding a multi-text multi-speaker script (diffe) that is based on a script from asiff00 on issue 61 for Sesame: A Conversational Speech G… ☆25 · Updated 5 months ago
- A Field-Theoretic Approach to Unbounded Memory in Large Language Models ☆20 · Updated 5 months ago
- This small API downloads and exposes access to NeuML's txtai-wikipedia and full wikipedia datasets, taking in a query and returning full … ☆100 · Updated 3 weeks ago
- ☆165 · Updated last month
- Super simple Python connectors for llama.cpp, including vision models (Gemma 3, Qwen2-VL). Compile llama.cpp and run! ☆28 · Updated last month
- ☆132 · Updated 4 months ago
- This extension enhances the capabilities of textgen-webui by integrating advanced vision models, allowing users to have contextualized co… ☆57 · Updated 10 months ago
- Who needs o1 anyway? Add CoT to any OpenAI-compatible endpoint. ☆44 · Updated last year
- A TTS model capable of generating ultra-realistic dialogue in one pass. ☆31 · Updated 4 months ago
- Automatically quantize GGUF models ☆200 · Updated this week