A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
☆2,913 · Updated Sep 30, 2023
Alternatives and similar repositories for exllama
Users interested in exllama are comparing it to the libraries listed below.
- A fast inference library for running LLMs locally on modern consumer-class GPUs (☆4,468, updated Mar 4, 2026)
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. (☆5,034, updated Apr 11, 2025)
- 4-bit quantization of LLaMA using GPTQ (☆3,073, updated Jul 13, 2024)
- Large Language Model Text Generation Inference (☆10,812, updated Jan 8, 2026)
- LLMs built on Evol Instruct: WizardLM, WizardCoder, WizardMath (☆9,478, updated Jun 7, 2025)
- QLoRA: Efficient Finetuning of Quantized LLMs (☆10,858, updated Jun 10, 2024)
- The original local LLM interface. Text, vision, tool-calling, training, and more. 100% offline. (☆46,278, updated this week)
- Python bindings for the Transformer models implemented in C/C++ using the GGML library. (☆1,883, updated Jan 28, 2024)
- Customizable implementation of the self-instruct paper. (☆1,050, updated Mar 7, 2024)
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. (☆2,317, updated May 11, 2025)
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration (☆3,463, updated Jul 17, 2025)
- Go ahead and axolotl questions (☆11,460, updated this week)
- Python bindings for llama.cpp (☆10,058, updated Aug 15, 2025; see the first sketch after this list)
- Accessible large language models via k-bit quantization for PyTorch. (☆8,052, updated this week; see the second sketch after this list)
- Universal LLM Deployment Engine with ML Compilation (☆22,246, updated this week)
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads (☆2,719, updated Jun 25, 2024)
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization (☆713, updated Aug 13, 2024)
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". (☆2,266, updated Mar 27, 2024)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆73,479, updated this week; see the third sketch after this list)
- Tools for merging pretrained large language models. (☆6,867, updated Mar 15, 2026)
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks (☆7,201, updated Jul 11, 2024)
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset (☆7,537, updated Jul 16, 2023)
- Tensor library for machine learning (☆14,252, updated this week)
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. (☆39,428, updated Jun 2, 2025)
- Fast inference engine for Transformer models (☆4,368, updated Feb 4, 2026)
- The official API server for Exllama. OAI compatible, lightweight, and fast. (☆1,154, updated Mar 13, 2026)
- Running large language models on a single GPU for throughput-oriented scenarios. (☆9,380, updated Oct 28, 2024)
- LLM inference in C/C++ (☆98,098, updated this week)
- A guidance language for controlling large language models. (☆21,346, updated Mar 13, 2026)
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. (☆64, updated Oct 13, 2023)
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… (☆6,082, updated Jul 1, 2025)
- Large-scale LLM inference engine (☆1,677, updated Mar 12, 2026)
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. (☆7,694, updated Mar 13, 2026)
- Fast and memory-efficient exact attention (☆22,832, updated this week)
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… (☆14,419, updated Mar 5, 2026)
- Instruct-tune LLaMA on consumer hardware (☆18,961, updated Jul 29, 2024)
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. (☆1,041, updated Sep 4, 2024)
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (☆10,020, updated Sep 7, 2024)
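To give a flavor of how a few of the libraries above are used, here are three short sketches. First, the llama.cpp Python bindings: a minimal sketch of loading a local GGUF model and generating a completion. The model path is a placeholder, not a file shipped with the library.

```python
# Minimal llama-cpp-python usage sketch. The GGUF path below is a
# hypothetical local file; substitute any quantized GGUF model.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
    n_ctx=2048,       # context window size
)

out = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:", "\n"],  # stop at the next question or a newline
)
print(out["choices"][0]["text"])
```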
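Second, the bitsandbytes entry: a sketch of loading a checkpoint in 4-bit through the transformers integration. The checkpoint name is a placeholder, and the NF4 settings mirror the QLoRA defaults rather than anything this list prescribes.

```python
# Sketch of 4-bit weight loading via transformers + bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4 bits at load time
    bnb_4bit_quant_type="nf4",              # NormalFloat4, as popularized by QLoRA
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for the actual matmuls
)

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate place layers across available devices
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```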
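Third, the vLLM entry: a sketch of offline batched generation, which is where its throughput focus shows. The tiny model name is a placeholder chosen so the example runs quickly.

```python
# Sketch of offline batched generation with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # small placeholder model for a quick run
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = [
    "The capital of France is",
    "Quantization reduces memory use by",
]
for output in llm.generate(prompts, params):
    # each result carries the original prompt plus its sampled completion
    print(output.prompt, output.outputs[0].text)
```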