An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs
☆753 · Apr 4, 2026 · Updated last week
Alternatives and similar repositories for exllamav3
Users interested in exllamav3 are comparing it to the libraries listed below.
- The official API server for Exllama. OAI compatible, lightweight, and fast. ☆1,175 · Updated this week
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,493 · Mar 4, 2026 · Updated last month
- ☆94 · Mar 28, 2026 · Updated 2 weeks ago
- llama.cpp fork with additional SOTA quants and improved performance ☆2,026 · Updated this week
- Web UI for ExLlamaV2 ☆511 · Feb 5, 2025 · Updated last year
- ☆167 · Jun 22, 2025 · Updated 9 months ago
- Large-scale LLM inference engine ☆1,686 · Mar 12, 2026 · Updated last month
- ☆74 · Jun 20, 2025 · Updated 9 months ago
- Croco.Cpp is a fork of KoboldCPP inferring GGML/GGUF models on CPU/CUDA with KoboldAI's UI. It's powered partly by IK_LLama.cpp, and compati… ☆165 · Apr 4, 2026 · Updated last week
- Yet Another (LLM) Web UI, made with Gemini ☆12 · Dec 25, 2024 · Updated last year
- ☆64 · Jul 10, 2025 · Updated 9 months ago
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining. ☆50 · Oct 29, 2025 · Updated 5 months ago
- A simple Gradio WebUI for loading/unloading models and LoRAs in tabbyAPI. ☆20 · Nov 21, 2024 · Updated last year
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,914 · Sep 30, 2023 · Updated 2 years ago
- A multimodal, function-calling-powered LLM web UI. ☆214 · Sep 23, 2024 · Updated last year
- ik_llama.cpp's Thireus fork with release builds for macOS/Windows/Ubuntu CPU, Vulkan and CUDA ☆94 · Updated this week
- REAP: Router-weighted Expert Activation Pruning for SMoE compression ☆320 · Updated this week
- LLM model quantization (compression) toolkit with HW acceleration support for Nvidia, AMD, Intel GPU and Intel/AMD/Apple CPU via HF, vLLM… ☆1,101 · Updated this week
- Modified Beam Search with periodic restart ☆12 · Sep 12, 2024 · Updated last year
- An extension to Oobabooga to add a simple memory function for chat ☆25 · Jun 5, 2023 · Updated 2 years ago
- ☆55 · Oct 10, 2025 · Updated 6 months ago
- Run GGUF models easily with a KoboldAI UI. One File. Zero Install. ☆9,968 · Apr 6, 2026 · Updated last week
- An unsupervised model merging algorithm for Transformers-based language models. ☆108 · Apr 29, 2024 · Updated last year
- Official implementation of Half-Quadratic Quantization (HQQ) ☆925 · Feb 26, 2026 · Updated last month
- LLM frontend in a single HTML file ☆716 · Dec 27, 2025 · Updated 3 months ago
- Reliable model swapping for any local OpenAI/Anthropic compatible server - llama.cpp, vLLM, etc. ☆3,212 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,996 · Updated this week
- SOTA rounding-based quantization for high-accuracy low-bit LLM inference, seamlessly optimized for CPU/XPU/CUDA, with multi-datatype supp… ☆957 · Updated this week
- An OpenAI API compatible LLM inference server based on ExLlamaV2. ☆25 · Feb 9, 2024 · Updated 2 years ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆14 · Mar 30, 2024 · Updated 2 years ago
- Optimizing inference proxy for LLMs ☆3,411 · Mar 19, 2026 · Updated 3 weeks ago
- Customizable implementation of the self-instruct paper. ☆1,050 · Mar 7, 2024 · Updated 2 years ago
- A fast batching API to serve LLM models ☆189 · Apr 26, 2024 · Updated last year
- ☆135 · Updated this week
- An Open WebUI function for a better R1 experience ☆77 · Mar 7, 2025 · Updated last year
- A stable, fast and easy-to-use inference library with a focus on a sync-to-async API ☆48 · Sep 26, 2024 · Updated last year
- ☆110 · Aug 21, 2025 · Updated 7 months ago
- Official PyTorch repository for Extreme Compression of Large Language Models via Additive Quantization https://arxiv.org/pdf/2401.06118.p… ☆1,314 · Feb 26, 2026 · Updated last month
- Test your local LLMs on the AIME problems ☆34 · Jun 7, 2025 · Updated 10 months ago