An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs
☆809 · Apr 27, 2026 · Updated this week
Alternatives and similar repositories for exllamav3
Users interested in exllamav3 are comparing it to the libraries listed below.
- The official API server for Exllama. OAI compatible, lightweight, and fast. ☆1,197 · Apr 24, 2026 · Updated last week
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,511 · Mar 4, 2026 · Updated last month
- ☆94 · Mar 28, 2026 · Updated last month
- llama.cpp fork with additional SOTA quants and improved performance ☆2,276 · Updated this week
- Web UI for ExLlamaV2 ☆511 · Feb 5, 2025 · Updated last year
- ☆169 · Jun 22, 2025 · Updated 10 months ago
- Large-scale LLM inference engine ☆1,714 · Updated this week
- Produce your own Dynamic 3.0 Quants and achieve optimum accuracy & SOTA quantization performance! Input a target size and the toolchain w… ☆120 · Updated this week
- ☆76 · Jun 20, 2025 · Updated 10 months ago
- Prompt Jinja2 templates for LLMs ☆35 · Jul 9, 2025 · Updated 9 months ago
- Croco.Cpp is a fork of KoboldCPP for inferring GGML/GGUF models on CPU/CUDA with KoboldAI's UI. It's powered partly by IK_LLama.cpp, and compati… ☆169 · Apr 23, 2026 · Updated last week
- Yet Another (LLM) Web UI, made with Gemini ☆12 · Dec 25, 2024 · Updated last year
- ☆64 · Jul 10, 2025 · Updated 9 months ago
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding without retraining. ☆52 · Oct 29, 2025 · Updated 6 months ago
- A simple Gradio WebUI for loading/unloading models and LoRAs in tabbyAPI. ☆20 · Nov 21, 2024 · Updated last year
- A multimodal, function-calling-powered LLM web UI. ☆213 · Sep 23, 2024 · Updated last year
- A more memory-efficient rewrite of the HF Transformers implementation of Llama for use with quantized weights. ☆2,915 · Sep 30, 2023 · Updated 2 years ago
- ik_llama.cpp's Thireus fork with release builds for macOS/Windows/Ubuntu CPU, Vulkan, and CUDA ☆118 · Updated this week
- REAP: Router-weighted Expert Activation Pruning for SMoE compression ☆347 · Apr 17, 2026 · Updated 2 weeks ago
- LLM model quantization (compression) toolkit with HW acceleration support for Nvidia, AMD, and Intel GPUs and Intel/AMD/Apple CPUs via HF, vLLM… ☆1,133 · Updated this week
- An extension for Oobabooga that adds a simple memory function for chat ☆25 · Jun 5, 2023 · Updated 2 years ago
- Modified beam search with periodic restarts ☆12 · Sep 12, 2024 · Updated last year
- ☆56 · Oct 10, 2025 · Updated 6 months ago
- An unsupervised model-merging algorithm for Transformers-based language models. ☆108 · Apr 29, 2024 · Updated 2 years ago
- Run GGUF models easily with a KoboldAI UI. One File. Zero Install. ☆10,323 · Apr 26, 2026 · Updated last week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆931 · Feb 26, 2026 · Updated 2 months ago
- LLM frontend in a single HTML file ☆721 · Dec 27, 2025 · Updated 4 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆3,169 · Updated this week
- Reliable model swapping for any local OpenAI/Anthropic-compatible server (llama.cpp, vLLM, etc.) ☆3,772 · Updated this week
- A SOTA quantization algorithm for high-accuracy, low-bit LLM inference, seamlessly optimized for CPU/XPU/CUDA, with multi-datatype support… ☆1,068 · Updated this week
- An OpenAI-API-compatible LLM inference server based on ExLlamaV2. ☆25 · Feb 9, 2024 · Updated 2 years ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆14 · Mar 30, 2024 · Updated 2 years ago
- Optimizing inference proxy for LLMs ☆3,440 · Mar 19, 2026 · Updated last month
- Customizable implementation of the self-instruct paper. ☆1,053 · Mar 7, 2024 · Updated 2 years ago
- A fast batching API for serving LLMs ☆189 · Apr 26, 2024 · Updated 2 years ago
- ☆136 · Apr 8, 2026 · Updated 3 weeks ago
- An Open WebUI function for a better R1 experience ☆77 · Mar 7, 2025 · Updated last year
- A stable, fast, and easy-to-use inference library with a focus on a sync-to-async API ☆48 · Sep 26, 2024 · Updated last year
- ☆111 · Aug 21, 2025 · Updated 8 months ago