AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
☆2,604 · Updated Apr 24, 2026
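For context, the core operation shared by AIMET and the libraries listed below is uniform affine quantization: mapping a floating-point range to a small integer grid via a scale and zero-point. A minimal illustrative sketch in plain Python follows; this is not AIMET's API, and all function names here are my own:

```python
def quantize_params(min_val, max_val, num_bits=8):
    """Compute scale and zero-point mapping [min_val, max_val] onto [0, 2**num_bits - 1]."""
    qmin, qmax = 0, 2 ** num_bits - 1
    # Extend the range to include zero so it can be represented exactly.
    min_val, max_val = min(min_val, 0.0), max(max_val, 0.0)
    scale = (max_val - min_val) / (qmax - qmin)
    zero_point = round(qmin - min_val / scale)
    return scale, int(max(qmin, min(qmax, zero_point)))

def quantize(x, scale, zero_point, num_bits=8):
    """Round to the integer grid, clamping to the representable range."""
    qmin, qmax = 0, 2 ** num_bits - 1
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Map an integer code back to its (approximate) real value."""
    return scale * (q - zero_point)

scale, zp = quantize_params(-1.0, 3.0)   # e.g. an activation range seen during calibration
q = quantize(0.5, scale, zp)
restored = dequantize(q, scale, zp)      # close to 0.5, within one quantization step
```

The round-trip error is bounded by half a step (`scale / 2`) for in-range values; values outside the calibrated range saturate at the clamp, which is why range estimation is the central problem these tools solve.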
Alternatives and similar repositories for aimet
Users interested in aimet are comparing it to the libraries listed below.
- Model Quantization Benchmark (☆866, updated Apr 20, 2025)
- PyTorch implementation of Data Free Quantization Through Weight Equalization and Bias Correction (☆263, updated Oct 3, 2023)
- PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool (☆1,793, updated Mar 28, 2024)
- Neural Network Compression Framework for enhanced OpenVINO™ inference (☆1,153, updated Apr 24, 2026)
- A list of papers, docs, and code about model quantization. This repo aims to provide information for model quantization research; we are co… (☆2,360, updated Apr 25, 2026)
- Simplify your onnx model (☆4,328, updated this week)
- Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM (☆461, updated May 15, 2023)
- PyTorch implementation of BRECQ, ICLR 2021 (☆297, updated Aug 1, 2021)
- micronet, a model compression and deployment library. Compression: 1. quantization: quantization-aware training (QAT), High-Bit (>2b) (DoReFa/Quantiz… (☆2,271, updated May 6, 2025)
- Brevitas: neural network quantization in PyTorch (☆1,524, updated Apr 23, 2026)
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) and sparsity; leading model compression techniques on PyTorch, TensorFlow, … (☆2,628, updated this week)
- TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.
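Several entries above, notably the data-free quantization repository, implement cross-layer weight equalization: rescaling the per-channel weight ranges of two adjacent layers so both quantize well, without changing the network's output (ReLU's positive homogeneity, relu(s·x) = s·relu(x) for s > 0, makes this exact). A hedged, framework-free sketch of the idea, using plain Python lists as matrices (all names are my own, not any library's API):

```python
import math

def equalize(W1, b1, W2):
    """Rescale two consecutive layers so their per-channel weight ranges match.

    Layer 1 computes h = relu(W1 @ x + b1); layer 2 computes y = W2 @ h.
    Dividing row i of W1 (and b1[i]) by s[i] while multiplying column i of
    W2 by s[i] leaves y unchanged, since relu is positively homogeneous.
    Choosing s[i] = sqrt(r1[i] / r2[i]) makes both ranges sqrt(r1[i]*r2[i]).
    """
    r1 = [max(abs(w) for w in row) for row in W1]            # output-channel ranges of layer 1
    r2 = [max(abs(row[j]) for row in W2) for j in range(len(W2[0]))]  # input-channel ranges of layer 2
    s = [math.sqrt(a / b) for a, b in zip(r1, r2)]
    W1p = [[w / s[i] for w in row] for i, row in enumerate(W1)]
    b1p = [b / s[i] for i, b in enumerate(b1)]
    W2p = [[w * s[j] for j, w in enumerate(row)] for row in W2]
    return W1p, b1p, W2p

def forward(W1, b1, W2, x):
    """y = W2 @ relu(W1 @ x + b1), for checking that equalization preserves it."""
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b) for row, b in zip(W1, b1)]
    return [sum(w * hi for w, hi in zip(row, h)) for row in W2]
```

After `equalize`, each output channel of the first layer and the matching input channel of the second share the same absolute range, so a single per-tensor scale wastes far less precision; the forward pass is numerically identical, which is what makes the technique "data free".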