The official API server for Exllama. OAI compatible, lightweight, and fast.
☆1,154 · Mar 13, 2026 · Updated last week
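Because tabbyAPI exposes an OpenAI-compatible endpoint, any standard OAI client can talk to it. A minimal sketch of the request shape, assuming a locally running server — the base URL, port, and model name below are placeholder assumptions, not values from this listing:

```python
import json

# Placeholder: a tabbyAPI-style server listening locally (port is an assumption).
BASE_URL = "http://127.0.0.1:5000/v1"

def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style /chat/completions request body."""
    return {
        "model": model,  # hypothetical model name, e.g. an EXL2 quant you loaded
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# The body would be POSTed to f"{BASE_URL}/chat/completions" with an
# Authorization header carrying the server's API key.
body = build_chat_request("my-exl2-model", "Hello!")
print(json.dumps(body))
```

Since the wire format matches OpenAI's, the official `openai` client library can also be pointed at the local base URL instead of hand-building requests.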
Alternatives and similar repositories for tabbyAPI
Users interested in tabbyAPI are comparing it to the libraries listed below.
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,468 · Mar 4, 2026 · Updated 2 weeks ago
- Web UI for ExLlamaV2 ☆511 · Feb 5, 2025 · Updated last year
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆686 · Updated this week
- Large-scale LLM inference engine ☆1,677 · Mar 12, 2026 · Updated last week
- ☆93 · Dec 9, 2025 · Updated 3 months ago
- A simple Gradio WebUI for loading/unloading models and LoRAs in tabbyAPI. ☆20 · Nov 21, 2024 · Updated last year
- Run GGUF models easily with a KoboldAI UI. One File. Zero Install. ☆9,721 · Updated this week
- Loader extension for tabbyAPI in SillyTavern ☆26 · Jun 30, 2025 · Updated 8 months ago
- Prompt Jinja2 templates for LLMs ☆35 · Jul 9, 2025 · Updated 8 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,913 · Sep 30, 2023 · Updated 2 years ago
- A multimodal, function-calling powered LLM webui. ☆215 · Sep 23, 2024 · Updated last year
- Reliable model swapping for any local OpenAI/Anthropic compatible server (llama.cpp, vllm, etc.) ☆2,807 · Updated this week
- llama.cpp fork with additional SOTA quants and improved performance ☆1,809 · Mar 14, 2026 · Updated last week
- LLM frontend in a single HTML file ☆709 · Dec 27, 2025 · Updated 2 months ago
- Optimizing inference proxy for LLMs ☆3,381 · Jan 28, 2026 · Updated last month
- LLM Frontend for Power Users. ☆24,453 · Updated this week
- The original local LLM interface. Text, vision, tool-calling, training, and more. 100% offline. ☆46,278 · Updated this week
- Croco.Cpp is a fork of KoboldCPP inferring GGML/GGUF models on CPU/CUDA with KoboldAI's UI. It's powered partly by IK_LLama.cpp, and compati… ☆161 · Mar 12, 2026 · Updated last week
- ☆134 · Mar 14, 2026 · Updated last week
- A fast batching API to serve LLM models ☆189 · Apr 26, 2024 · Updated last year
- Tools for merging pretrained large language models. ☆6,867 · Updated this week
- Go ahead and axolotl questions ☆11,460 · Updated this week
- Efficient visual programming for AI language models ☆362 · May 13, 2025 · Updated 10 months ago
- An OpenAI API compatible LLM inference server based on ExLlamaV2. ☆25 · Feb 9, 2024 · Updated 2 years ago
- One command brings a complete pre-wired LLM stack with hundreds of services to explore. ☆2,508 · Updated this week
- ☆166 · Jun 22, 2025 · Updated 8 months ago
- ☆337 · Mar 5, 2026 · Updated 2 weeks ago
- AllTalk is based on the Coqui TTS engine, similar to the Coqui_tts extension for Text generation webUI, but supports a variety of adv… ☆2,280 · Jan 9, 2026 · Updated 2 months ago
- Official PyTorch repository for Extreme Compression of Large Language Models via Additive Quantization https://arxiv.org/pdf/2401.06118.p… ☆1,315 · Feb 26, 2026 · Updated 3 weeks ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆167 · May 16, 2024 · Updated last year
- KoboldAI is generative AI software optimized for fictional use, but capable of much more! ☆423 · Jan 16, 2025 · Updated last year
- Fast, flexible LLM inference ☆6,713 · Updated this week
- An OpenAI API compatible server for chat with image input and questions about the images, a.k.a. multimodal. ☆267 · Mar 6, 2025 · Updated last year
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆2,317 · May 11, 2025 · Updated 10 months ago
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM … ☆622 · Mar 9, 2026 · Updated last week
- Enforce the output format (JSON Schema, Regex, etc.) of a language model ☆1,994 · Aug 24, 2025 · Updated 6 months ago
- WilmerAI is one of the oldest LLM semantic routers. It uses multi-layer prompt routing and complex workflows to allow you to not only cre… ☆806 · Feb 9, 2026 · Updated last month
- Formatron empowers everyone to control the format of language models' output with minimal overhead. ☆234 · Jun 7, 2025 · Updated 9 months ago
- Create Custom LLMs ☆1,820 · Nov 8, 2025 · Updated 4 months ago