A fast inference library for running LLMs locally on modern consumer-class GPUs
☆4,497 · Mar 4, 2026 · Updated last month
Alternatives and similar repositories for exllamav2
Users interested in exllamav2 are comparing it to the libraries listed below.
- The official API server for Exllama. OAI compatible, lightweight, and fast. ☆1,192 · Updated this week
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,916 · Sep 30, 2023 · Updated 2 years ago
- Web UI for ExLlamaV2 ☆511 · Feb 5, 2025 · Updated last year
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆784 · Updated this week
- Large-scale LLM inference engine ☆1,705 · Updated this week
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,051 · Apr 11, 2025 · Updated last year
- Go ahead and axolotl questions ☆11,737 · Updated this week
- Tools for merging pretrained large language models. ☆6,991 · Mar 15, 2026 · Updated last month
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆2,327 · May 11, 2025 · Updated 11 months ago
- Large Language Model Text Generation Inference ☆10,841 · Mar 21, 2026 · Updated last month
- The original local LLM interface. Text, vision, tool-calling, training. UI + API, 100% offline and private. ☆46,836 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆77,531 · Updated this week
- Python bindings for llama.cpp ☆10,212 · Apr 14, 2026 · Updated last week
- Universal LLM Deployment Engine with ML Compilation ☆22,482 · Apr 14, 2026 · Updated last week
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆26,025 · Updated this week
- High-speed Large Language Model Serving for Local Deployment ☆9,359 · Jan 24, 2026 · Updated 2 months ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,503 · Jul 17, 2025 · Updated 9 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,870 · Jun 10, 2024 · Updated last year
- 4-bit quantization of LLaMA using GPTQ ☆3,072 · Jul 13, 2024 · Updated last year
- Run GGUF models easily with a KoboldAI UI. One File. Zero Install. ☆10,190 · Updated this week
- LLM inference in C/C++ ☆104,862 · Updated this week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,939 · May 3, 2024 · Updated last year
- Structured Outputs ☆13,694 · Apr 16, 2026 · Updated last week
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,474 · Jun 7, 2025 · Updated 10 months ago
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,797 · Updated this week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,723 · Jun 25, 2024 · Updated last year
- Accessible large language models via k-bit quantization for PyTorch. ☆8,149 · Updated this week
- Python bindings for the Transformer models implemented in C/C++ using the GGML library. ☆1,885 · Jan 28, 2024 · Updated 2 years ago
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆13,433 · Updated this week
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,219 · Jul 11, 2024 · Updated last year
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,753 · May 21, 2025 · Updated 11 months ago
- Tensor library for machine learning ☆14,459 · Apr 14, 2026 · Updated last week
- Fast and memory-efficient exact attention ☆23,457 · Updated this week
- Customizable implementation of the self-instruct paper. ☆1,052 · Mar 7, 2024 · Updated 2 years ago
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading ☆10,079 · Sep 7, 2024 · Updated last year
- Web UI for training and running open models like Gemma 4, Qwen3.5, DeepSeek, gpt-oss locally. ☆62,269 · Updated this week
- Fast inference engine for Transformer models ☆4,445 · Feb 4, 2026 · Updated 2 months ago
- A guidance language for controlling large language models. ☆21,397 · Apr 10, 2026 · Updated last week
- Official PyTorch repository for Extreme Compression of Large Language Models via Additive Quantization https://arxiv.org/pdf/2401.06118.p… ☆1,317 · Feb 26, 2026 · Updated last month
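Several entries above (GPTQ-for-LLaMa, AutoGPTQ, AutoAWQ, AQLM, bitsandbytes) revolve around low-bit weight quantization. As a rough, framework-free sketch of the core idea only, not any of these libraries' actual implementations (they add calibration data, activation-aware scaling, and error compensation on top), symmetric 4-bit round-to-nearest quantization looks like this:

```python
# Hypothetical illustration: symmetric 4-bit round-to-nearest quantization.
# Real libraries (GPTQ, AWQ, etc.) are far more sophisticated; this shows
# only the basic map from floats to a small integer grid and back.

def quantize_4bit(weights):
    """Map floats to integers in [-8, 7] with a single per-group scale."""
    scale = max(abs(w) for w in weights) / 7.0 or 1.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the 4-bit integers."""
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.97, -0.01, 0.44]
q, scale = quantize_4bit(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, scale, max_err)
```

With round-to-nearest, the reconstruction error per weight is bounded by half the scale; the quantization libraries listed here exist largely to shrink that error further on real model weight distributions.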