intel / auto-round
An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality degradation across Weight-Only Quantization, MXFP4, NVFP4, GGUF, and adaptive schemes.
☆839 · Updated this week
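For orientation, here is a minimal sketch of the weight-only quantization flow outlined in auto-round's README. The model name, bit width, and group size are illustrative assumptions, and the `AutoRound` constructor and `save_quantized` call should be verified against the version of the package you install.

```python
# Minimal sketch: 4-bit weight-only quantization with auto-round.
# Assumes `pip install auto-round`; model choice and hyperparameters are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "facebook/opt-125m"  # small model used purely for illustration
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Tune and quantize weights to 4 bits with group size 128 (commonly used defaults).
autoround = AutoRound(model, tokenizer, bits=4, group_size=128, sym=True)
autoround.quantize()

# Export the quantized checkpoint; "auto_round" is one of the documented export formats.
autoround.save_quantized("./opt-125m-w4g128", format="auto_round")
```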
Alternatives and similar repositories for auto-round
Users interested in auto-round are comparing it to the libraries listed below.
- VPTQ, a flexible and extreme low-bit quantization algorithm ☆674 · Updated 9 months ago
- LLM model quantization (compression) toolkit with hardware acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU and Intel/AMD/Apple CPU vi… ☆1,007 · Updated this week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆912 · Updated last month
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ☆1,005 · Updated last year
- An innovative library for efficient LLM inference via low-bit quantization ☆352 · Updated last year
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆810 · Updated 11 months ago
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆327 · Updated 2 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆238 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs ☆945 · Updated 3 months ago
- Efficient LLM Inference over Long Sequences ☆394 · Updated 7 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, approximate and dynamic sparse calculation of the attention… ☆1,180 · Updated 4 months ago
- A family of compressed models obtained via pruning and knowledge distillation ☆364 · Updated 3 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,660 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated 2 months ago
- DFloat11 [NeurIPS '25]: Lossless Compression of LLMs and DiTs for Efficient GPU Inference ☆600 · Updated 2 months ago
- ☆577 · Updated last year
- For releasing code related to compression methods for transformers, accompanying our publications ☆455 · Updated last year
- LLM KV cache compression made easy ☆866 · Updated last week
- ☆206 · Updated 9 months ago
- A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆327 · Updated 4 months ago
- [ICLR2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs ☆888 · Updated 2 months ago
- A pytorch quantization backend for optimum ☆1,022 · Updated 2 months ago
- Low-bit LLM inference on CPU/NPU with lookup table ☆916 · Updated 8 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆384 · Updated this week
- Code for NeurIPS'24 paper: QuaRot, an end-to-end 4-bit inference of large language models ☆480 · Updated last year
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆402 · Updated last year
- Comparison of Language Model Inference Engines ☆239 · Updated last year
- Code repo for the paper "SpinQuant: LLM quantization with learned rotations" ☆372 · Updated 11 months ago
- A high-performance inference system for large language models, designed for production environments ☆491 · Updated last month
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆220 · Updated this week