theroyallab / tabbyAPI
The official API server for Exllama. OAI compatible, lightweight, and fast.
☆1,047 · Updated 2 weeks ago
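Since tabbyAPI exposes an OpenAI-compatible endpoint, a minimal sketch of chatting with a local instance through the official `openai` Python client could look like the following; the base URL, port, API key, and model name are illustrative assumptions, not values taken from this listing.

```python
# Minimal sketch: querying a local tabbyAPI instance via its
# OpenAI-compatible chat endpoint. The base URL/port, API key, and
# model name are assumptions for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:5000/v1",  # assumed local tabbyAPI address
    api_key="sk-local-placeholder",       # assumed placeholder key
)

response = client.chat.completions.create(
    model="my-exl2-model",  # assumed name of whatever model the server has loaded
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Because several projects listed below also advertise OpenAI API compatibility, the same client-side pattern generally carries over to them as well.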
Alternatives and similar repositories for tabbyAPI
Users interested in tabbyAPI are comparing it to the repositories listed below.
- Web UI for ExLlamaV2 ☆513 · Updated 7 months ago
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆491 · Updated last week
- Large-scale LLM inference engine ☆1,543 · Updated this week
- LLM Frontend in a single html file ☆643 · Updated 7 months ago
- Model swapping for llama.cpp (or any local OpenAI API compatible server) ☆1,499 · Updated this week
- A multimodal, function-calling powered LLM web UI. ☆216 · Updated 11 months ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,309 · Updated 3 weeks ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆164 · Updated last year
- ☆657 · Updated 3 weeks ago
- llama.cpp fork with additional SOTA quants and improved performance ☆1,165 · Updated this week
- An extension for oobabooga/text-generation-webui that enables the LLM to search the web ☆267 · Updated this week
- An AI assistant beyond the chat box. ☆328 · Updated last year
- Your Trusty Memory-enabled AI Companion - Simple RAG chatbot optimized for local LLMs | 12 Languages Supported | OpenAI API Compatible ☆337 · Updated 6 months ago
- An OpenAI API compatible text to speech server using Coqui AI's xtts_v2 and/or piper tts as the backend. ☆808 · Updated 7 months ago
- An OpenAI API compatible API for chat with image input and questions about the images. aka Multimodal. ☆259 · Updated 6 months ago
- Dolphin System Messages ☆346 · Updated 6 months ago
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). Allowing users to chat with LLM … ☆589 · Updated 6 months ago
- KoboldAI is generative AI software optimized for fictional use, but capable of much more! ☆416 · Updated 7 months ago
- AI Inferencing at the Edge. A simple one-file way to run various GGML models with KoboldAI's UI with AMD ROCm offloading ☆686 · Updated 2 weeks ago
- ☆83 · Updated last week
- What If Language Models Expertly Routed All Inference? WilmerAI allows prompts to be routed to specialized workflows based on the domain … ☆768 · Updated last week
- Simple Python library/structure to ablate features in LLMs which are supported by TransformerLens ☆506 · Updated last year
- Memoir+ a persona memory extension for Text Gen Web UI. ☆214 · Updated 3 weeks ago
- AlwaysReddy is an LLM voice assistant that is always just a hotkey away. ☆754 · Updated 6 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,898 · Updated last year
- Simple go utility to download HuggingFace Models and Datasets ☆734 · Updated last week
- A simple FastAPI Server to run XTTSv2 ☆537 · Updated last year
- Croco.Cpp is a fork of KoboldCPP inferring GGML/GGUF models on CPU/CUDA with KoboldAI's UI. It's powered partly by IK_LLama.cpp, and compati… ☆137 · Updated this week
- A zero dependency web UI for any LLM backend, including KoboldCpp, OpenAI and AI Horde ☆135 · Updated this week
- The RunPod worker template for serving our large language model endpoints. Powered by vLLM. ☆365 · Updated last week