SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, and ONNX Runtime
☆2,623 · Apr 23, 2026 · Updated last week
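As a quick illustration of the library's post-training quantization flow, here is a minimal sketch assuming neural-compressor's 2.x `PostTrainingQuantConfig`/`fit` API; the toy model and calibration loader are placeholders for a real workload:

```python
import torch
from neural_compressor import PostTrainingQuantConfig
from neural_compressor.quantization import fit

# toy FP32 model and calibration data (placeholders)
fp32_model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU())
calib_loader = torch.utils.data.DataLoader(
    [(torch.randn(64), 0) for _ in range(32)], batch_size=8
)

# the default config performs INT8 post-training static quantization
q_model = fit(model=fp32_model, conf=PostTrainingQuantConfig(),
              calib_dataloader=calib_loader)
q_model.save("./int8_model")
```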
Alternatives and similar repositories for neural-compressor
Users interested in neural-compressor are comparing it to the libraries listed below.
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Platforms ☆2,178 · Oct 8, 2024 · Updated last year
- A Python package extending official PyTorch to easily obtain performance gains on Intel platforms ☆2,011 · Mar 30, 2026 · Updated last month
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,153 · Updated this week
- A SOTA quantization algorithm for high-accuracy low-bit LLM inference, seamlessly optimized for CPU/XPU/CUDA, with multi-datatype support… ☆1,031 · Apr 23, 2026 · Updated last week
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,641 · Jul 12, 2024 · Updated last year
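
  A minimal sketch of the core SmoothQuant idea: migrate activation outliers into the weights with a per-channel scale s_j = max|X_j|^α / max|W_j|^(1−α), so that X·Wᵀ = (X/s)·(s·W)ᵀ and both factors become easier to quantize:

  ```python
  import torch

  def smoothquant_scales(x_absmax, w_absmax, alpha=0.5):
      # s_j = max|X_j|^alpha / max|W_j|^(1-alpha), per input channel j
      return (x_absmax.clamp(min=1e-5) ** alpha) / (w_absmax.clamp(min=1e-5) ** (1 - alpha))

  # W: (out_features, in_features); X: (tokens, in_features) with outlier channels
  W = torch.randn(128, 64)
  X = torch.randn(256, 64) * (torch.rand(64) * 10)
  s = smoothquant_scales(X.abs().amax(dim=0), W.abs().amax(dim=0))
  X_smooth, W_smooth = X / s, W * s  # same product, flattened activation outliers
  assert torch.allclose(X @ W.T, X_smooth @ W_smooth.T, atol=1e-3)
  ```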
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,293 · Mar 27, 2024 · Updated 2 years ago
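
  A from-scratch sketch of the GPTQ update rule from the paper, not the repository's optimized implementation (which adds a Cholesky reformulation and lazy batched updates): quantize each weight column in turn and fold the error into the not-yet-quantized columns via the inverse Hessian:

  ```python
  import numpy as np

  def rtn(w, bits=4):
      # simple symmetric round-to-nearest onto a low-bit grid (per column)
      scale = np.abs(w).max() / (2 ** (bits - 1) - 1) + 1e-12
      return np.round(w / scale) * scale

  def gptq(W, H, quant=rtn):
      """W: (out_features, in_features); H ≈ 2·X·Xᵀ (+ damping) from calibration inputs."""
      W = W.copy()
      Hinv = np.linalg.inv(H)
      for q in range(W.shape[1]):
          w, w_q = W[:, q], quant(W[:, q])
          err = (w - w_q) / Hinv[q, q]
          W[:, q + 1:] -= np.outer(err, Hinv[q, q + 1:])       # error compensation
          W[:, q] = w_q
          Hinv -= np.outer(Hinv[:, q], Hinv[q, :]) / Hinv[q, q]  # drop coordinate q
      return W
  ```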
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,604 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆8,168 · Apr 20, 2026 · Updated last week
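
  A minimal sketch of the common path for using bitsandbytes through 🤗 Transformers (NF4 weight quantization at load time; the model ID is just an example):

  ```python
  import torch
  from transformers import AutoModelForCausalLM, BitsAndBytesConfig

  bnb_config = BitsAndBytesConfig(
      load_in_4bit=True,                      # quantize weights to 4 bit on load
      bnb_4bit_quant_type="nf4",              # NormalFloat4 data type
      bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for dequantized matmuls
  )
  model = AutoModelForCausalLM.from_pretrained(
      "facebook/opt-1.3b", quantization_config=bnb_config, device_map="auto"
  )
  ```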
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,512 · Jul 17, 2025 · Updated 9 months ago
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization tools ☆3,363 · Apr 15, 2026 · Updated 2 weeks ago
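
  A small sketch of Optimum's ONNX Runtime path (export a 🤗 model to ONNX and run it through the usual `pipeline`; the model ID is an example):

  ```python
  from optimum.onnxruntime import ORTModelForSequenceClassification
  from transformers import AutoTokenizer, pipeline

  model_id = "distilbert-base-uncased-finetuned-sst-2-english"
  model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
  print(clf("Quantization made this faster."))
  ```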
- Transformer-related optimization, including BERT and GPT ☆6,412 · Mar 27, 2024 · Updated 2 years ago
- A list of papers, docs, and code about model quantization. This repo aims to provide information for model quantization research, and we are continuously improving the project. ☆2,360 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆352 · Aug 30, 2024 · Updated last year
- A PyTorch quantization backend for Optimum ☆1,038 · Apr 2, 2026 · Updated 3 weeks ago
- Model compression for ONNX ☆101 · Mar 1, 2026 · Updated last month
- A general 2–8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, and easy export to ONNX/ONNX Runtime ☆190 · Mar 23, 2026 · Updated last month
- [ICLR 2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs. ☆891 · Nov 26, 2025 · Updated 5 months ago
- PyTorch native quantization and sparsity for training and inference ☆2,796 · Updated this week
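
  A minimal sketch of torchao's in-place quantization API, assuming the `quantize_`/`int8_weight_only` entry points (the exact config names have shifted across releases):

  ```python
  import torch
  from torchao.quantization import quantize_, int8_weight_only

  model = torch.nn.Sequential(torch.nn.Linear(1024, 1024)).to(torch.bfloat16).cuda()
  quantize_(model, int8_weight_only())  # swap Linear weights to int8 in place
  out = model(torch.randn(4, 1024, dtype=torch.bfloat16, device="cuda"))
  ```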
- oneAPI Deep Neural Network Library (oneDNN) ☆3,984 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆576 · Updated this week
- Sparsity-aware deep learning inference runtime for CPUs ☆3,162 · Jun 2, 2025 · Updated 10 months ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Serving with Unified Sparse Attention ☆834 · Mar 6, 2025 · Updated last year
- Development repository for the Triton language and compiler ☆19,040 · Updated this week
- PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool. ☆1,793 · Mar 28, 2024 · Updated 2 years ago
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT. ☆12,947 · Apr 13, 2026 · Updated 2 weeks ago
- Model Quantization Benchmark ☆865 · Apr 20, 2025 · Updated last year
- Fast and memory-efficient exact attention ☆23,563 · Updated this week
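
  A minimal usage sketch for the flash-attn package: `flash_attn_func` over (batch, seqlen, heads, head_dim) half-precision CUDA tensors:

  ```python
  import torch
  from flash_attn import flash_attn_func

  # q/k/v: (batch, seqlen, num_heads, head_dim), fp16/bf16, on GPU
  q = torch.randn(2, 1024, 16, 64, dtype=torch.float16, device="cuda")
  k, v = torch.randn_like(q), torch.randn_like(q)
  out = flash_attn_func(q, k, v, causal=True)  # exact attention, O(N) memory
  ```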
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆718 · Aug 13, 2024 · Updated last year
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference. ☆4,718 · Apr 9, 2026 · Updated 3 weeks ago
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Intel® Data Center GPUs ☆731 · Feb 11, 2026 · Updated 2 months ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,053 · Apr 11, 2025 · Updated last year
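
  A minimal quantize-and-save sketch with AutoGPTQ (the model ID and calibration text are placeholders):

  ```python
  from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
  from transformers import AutoTokenizer

  model_id = "facebook/opt-125m"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)

  model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)
  examples = [tokenizer("Some calibration text for GPTQ.", return_tensors="pt")]
  model.quantize(examples)                    # runs GPTQ layer by layer
  model.save_quantized("opt-125m-4bit-gptq")
  ```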
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆1,061 · Sep 4, 2024 · Updated last year
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on Hopper, Ada, and Blackwell GPUs ☆3,291 · Updated this week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs ☆13,487 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆3,169 · Updated this week
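
  A sketch of llm-compressor's one-shot flow, assuming the `oneshot` entry point and `GPTQModifier` recipe shown in its examples (argument names may differ between releases):

  ```python
  from llmcompressor.transformers import oneshot
  from llmcompressor.modifiers.quantization import GPTQModifier

  # quantize all Linear layers except the output head to 4-bit weights
  recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])
  oneshot(
      model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example model ID
      dataset="open_platypus",                     # built-in calibration set
      recipe=recipe,
      output_dir="TinyLlama-1.1B-W4A16",
      max_seq_length=2048,
      num_calibration_samples=512,
  )
  ```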
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆2,331 · May 11, 2025 · Updated 11 months ago
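
  A minimal quantize-and-save sketch with AutoAWQ (paths and model ID are placeholders):

  ```python
  from awq import AutoAWQForCausalLM
  from transformers import AutoTokenizer

  model_path = "facebook/opt-125m"
  quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

  model = AutoAWQForCausalLM.from_pretrained(model_path)
  tokenizer = AutoTokenizer.from_pretrained(model_path)
  model.quantize(tokenizer, quant_config=quant_config)  # AWQ calibration + packing
  model.save_quantized("opt-125m-awq")
  tokenizer.save_pretrained("opt-125m-awq")
  ```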
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆42,188 · Updated this week
- [CVPR 2023] DepGraph: Towards Any Structural Pruning; LLMs, Vision Foundation Models, etc. ☆3,296 · Sep 7, 2025 · Updated 7 months ago