AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
☆2,572 · Mar 22, 2026 · Updated this week
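The core operation shared by AIMET and most of the post-training quantization tools listed below can be sketched in a few lines: map a float tensor's value range onto an integer grid using a scale and zero-point. The sketch below is a generic illustration of uniform affine int8 quantization, not AIMET's actual API; the function names are invented for this example.

```python
# Minimal sketch of uniform affine (asymmetric) int8 quantization,
# the per-tensor operation underlying most post-training quantization tools.
# Generic illustration only -- not AIMET's API.

def quantize_params(xs, num_bits=8):
    """Derive scale and zero-point so [min(xs), max(xs)] maps onto the int range."""
    qmin, qmax = 0, 2 ** num_bits - 1
    # The representable range must include 0 so that zero is exactly encodable.
    lo, hi = min(min(xs), 0.0), max(max(xs), 0.0)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against a zero-width range
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(xs, scale, zero_point, num_bits=8):
    """Round each value to the integer grid and clamp to the representable range."""
    qmin, qmax = 0, 2 ** num_bits - 1
    return [min(max(round(x / scale) + zero_point, qmin), qmax) for x in xs]

def dequantize(qs, scale, zero_point):
    """Map integers back to floats; the round-trip error is bounded by the scale."""
    return [(q - zero_point) * scale for q in qs]

weights = [-0.5, 0.0, 0.25, 1.0]
s, z = quantize_params(weights)
q = quantize(weights, s, z)
recovered = dequantize(q, s, z)
```

Tools such as those below differ mainly in how they choose the scale and zero-point (calibration, weight equalization, integer programming, etc.), not in this basic mapping.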
Alternatives and similar repositories for aimet
Users interested in aimet are comparing it to the libraries listed below.
- ☆340 · Feb 12, 2026 · Updated last month
- Model Quantization Benchmark ☆862 · Apr 20, 2025 · Updated 11 months ago
- PyTorch implementation of Data Free Quantization Through Weight Equalization and Bias Correction. ☆263 · Oct 3, 2023 · Updated 2 years ago
- PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool. ☆1,787 · Mar 28, 2024 · Updated last year
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,141 · Updated this week
- A list of papers, docs, and code about model quantization. This repo aims to provide information for model quantization research; we are co… ☆2,334 · Jan 29, 2026 · Updated last month
- Simplify your ONNX model ☆4,309 · Feb 26, 2026 · Updated 3 weeks ago
- Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM. ☆454 · May 15, 2023 · Updated 2 years ago
- PyTorch implementation of BRECQ, ICLR 2021 ☆292 · Aug 1, 2021 · Updated 4 years ago
- micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa/Quantiz… ☆2,270 · May 6, 2025 · Updated 10 months ago
- Brevitas: neural network quantization in PyTorch ☆1,506 · Updated this week
- ☆210 · Nov 9, 2021 · Updated 4 years ago
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … ☆2,598 · Updated this week
- TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework. ☆873 · Mar 3, 2026 · Updated 2 weeks ago
- Open Machine Learning Compiler Framework ☆13,218 · Updated this week
- [CVPR 2023] DepGraph: Towards Any Structural Pruning; LLMs, Vision Foundation Models, etc. ☆3,275 · Sep 7, 2025 · Updated 6 months ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆98 · Jun 10, 2021 · Updated 4 years ago
- A simple network quantization demo using PyTorch from scratch. ☆541 · Jun 18, 2023 · Updated 2 years ago
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆12,800 · Mar 9, 2026 · Updated 2 weeks ago
- [ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment ☆1,943 · Dec 14, 2023 · Updated 2 years ago
- EasyQuant (EQ) is an efficient and simple post-training quantization method that effectively optimizes the scales of weights and activatio… ☆407 · Nov 22, 2022 · Updated 3 years ago
- [CVPR'20] ZeroQ: A Novel Zero Shot Quantization Framework ☆282 · Dec 8, 2023 · Updated 2 years ago
- A primitive library for neural networks ☆1,367 · Nov 24, 2024 · Updated last year
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,959 · Updated this week
- ncnn is a high-performance neural network inference framework optimized for mobile platforms ☆22,954 · Updated this week
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,549 · Aug 28, 2019 · Updated 6 years ago
- Low-precision matrix multiplication ☆1,835 · Jan 29, 2024 · Updated 2 years ago
- A tool to modify ONNX models visually, based on Netron and Flask. ☆1,617 · Nov 19, 2025 · Updated 4 months ago
- ☆57 · Dec 8, 2020 · Updated 5 years ago
- A curated list of neural network pruning resources. ☆2,491 · Apr 4, 2024 · Updated last year
- PyTorch implementation of APoT quantization (ICLR 2020) ☆287 · Dec 11, 2024 · Updated last year
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,266 · Mar 27, 2024 · Updated last year
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,276 · Updated this week
- A tool for parsing, editing, optimizing, and profiling ONNX models. ☆482 · Mar 11, 2026 · Updated last week
- MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering high-performance on-device LLMs and Edge AI. ☆14,618 · Updated this week
- Quantization of convolutional neural networks. ☆250 · Aug 5, 2024 · Updated last year
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆956 · Apr 11, 2025 · Updated 11 months ago
- Transformer-related optimization, including BERT and GPT ☆6,400 · Mar 27, 2024 · Updated last year
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,625 · Jul 12, 2024 · Updated last year