KoboldAI / KoboldAI-Client
For GGUF support, see KoboldCPP: https://github.com/LostRuins/koboldcpp
☆3,704, updated 5 months ago
Alternatives and similar repositories for KoboldAI-Client
Users interested in KoboldAI-Client are comparing it to the repositories listed below.
- Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI ChatGPT, GPT-4) (☆2,468, updated last week)
- KoboldAI is generative AI software optimized for fictional use, but capable of much more! (☆410, updated 5 months ago)
- Run GGUF models easily with a KoboldAI UI. One File. Zero Install. (☆7,595, updated this week)
- A crowdsourced distributed cluster for AI art and text generation (☆1,238, updated this week)
- AI-agnostic (multi-user and multi-bot) chat with fictional characters. Designed with scale in mind. (☆604, updated this week)
- Simplified installers for oobabooga/text-generation-webui. (☆563, updated last year)
- Extensions API for SillyTavern. (☆629, updated 6 months ago)
- ☆635, updated last week
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. (☆2,882, updated last year)
- Web UI to run the Alpaca model locally (☆870, updated 2 years ago)
- SD.Next: all-in-one WebUI for AI generative image and video creation (☆6,383, updated this week)
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (☆9,675, updated 9 months ago)
- Lord of Large Language and Multi modal Systems Web User Interface (☆4,683, updated this week)
- LLM UI with advanced features, easy setup, and multiple backend support. (☆44,023, updated this week)
- Make your own story. User-friendly software for LLM roleplaying (☆1,059, updated this week)
- A fast inference library for running LLMs locally on modern consumer-class GPUs (☆4,213, updated 2 weeks ago)
- A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI… (☆598, updated 2 years ago)
- The simplest way to run Alpaca (and other LLaMA-based local LLMs) on your own computer (☆1,311, updated last year)
- The official API server for Exllama. OAI-compatible, lightweight, and fast. (☆987, updated this week)
- The simplest way to run LLaMA on your local machine (☆13,070, updated last year)
- fast-stable-diffusion + DreamBooth (☆7,764, updated 2 weeks ago)
- ☆10,908, updated this week
- Python bindings for llama.cpp (☆9,257, updated last month)
- Feature showcase for stable-diffusion-webui (☆1,016, updated last year)
- 4-bit quantization of LLaMA using GPTQ (☆3,057, updated 11 months ago)
- Stable Diffusion web UI UX (☆1,064, updated 7 months ago)
- Prototype UI for chatting with the Pygmalion models. (☆235, updated 2 years ago)
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. (☆4,873, updated 2 months ago)
- Locally run an instruction-tuned chat-style LLM (☆10,227, updated 2 years ago)
- Stable Diffusion web UI (☆7,897, updated 10 months ago)