quic / aimet-model-zoo
☆312 · Updated last year
Alternatives and similar repositories for aimet-model-zoo:
Users interested in aimet-model-zoo are comparing it to the libraries listed below; a minimal quantization sketch follows the list.
- A parser, editor and profiler tool for ONNX models. ☆414 · Updated 2 weeks ago
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,202 · Updated this week
- Model Quantization Benchmark ☆783 · Updated last week
- PyTorch implementation of Data Free Quantization Through Weight Equalization and Bias Correction. ☆260 · Updated last year
- ☆197 · Updated 3 years ago
- A simple network quantization demo using PyTorch from scratch. ☆518 · Updated last year
- A code generator from ONNX to PyTorch code ☆135 · Updated 2 years ago
- Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM. ☆422 · Updated last year
- OTOv1-v3, NeurIPS, ICLR, TMLR, DNN Training, Compression, Structured Pruning, Erasing Operators, CNN, Diffusion, LLM ☆296 · Updated 4 months ago
- Quantization of Convolutional Neural Networks. ☆243 · Updated 5 months ago
- [CVPR'20] ZeroQ: A Novel Zero Shot Quantization Framework ☆275 · Updated last year
- Inference of quantization-aware trained networks using TensorRT ☆80 · Updated 2 years ago
- PyTorch Quantization Aware Training Example ☆127 · Updated 8 months ago
- PyTorch implementation of BRECQ, ICLR 2021 ☆261 · Updated 3 years ago
- FakeQuantize with Learned Step Size (LSQ+) as Observer in PyTorch ☆33 · Updated 3 years ago
- ☆132 · Updated last year
- TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework. ☆776 · Updated last month
- Transform ONNX model to PyTorch representation ☆324 · Updated 2 months ago
- PyTorch implementation for the APoT quantization (ICLR 2020) ☆270 · Updated last month
- A set of simple tools for splitting, merging, OP deletion, size compression, rewriting attributes and constants, OP generation, change op… ☆283 · Updated 9 months ago
- Offline quantization tools for deployment. ☆122 · Updated last year
- [IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer ☆320 · Updated last year
- EasyQuant (EQ) is an efficient and simple post-training quantization method via effectively optimizing the scales of weights and activatio… ☆393 · Updated 2 years ago
- ☆221 · Updated 2 years ago
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆968 · Updated this week
- TFLite model analyzer & memory optimizer ☆121 · Updated last year
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… ☆348 · Updated this week
- ☆120 · Updated last month
- Benchmark for embedded-AI deep learning inference engines, such as NCNN / TNN / MNN / TensorFlow Lite, etc. ☆204 · Updated 3 years ago
- Unofficial implementation of LSQ-Net, a neural network quantization framework ☆286 · Updated 8 months ago
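Most of the repositories above implement some flavor of post-training quantization or quantization-aware training for PyTorch or ONNX models. As a point of reference only, here is a minimal sketch (not taken from any specific project listed here, assuming PyTorch is available) of symmetric per-tensor int8 min-max quantization, the basic step these libraries automate and refine with per-channel scales, calibration data, and learned quantizers:

```python
# Minimal illustrative sketch: symmetric per-tensor int8 quantization.
# Function names here are hypothetical, not taken from any library above.
import torch

def quantize_symmetric_int8(w: torch.Tensor):
    """Quantize a float tensor to int8 with a single symmetric scale."""
    scale = w.abs().max() / 127.0            # map the largest magnitude to 127
    q = torch.clamp(torch.round(w / scale), -128, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate float tensor from the int8 representation."""
    return q.float() * scale

if __name__ == "__main__":
    w = torch.randn(64, 128)
    q, scale = quantize_symmetric_int8(w)
    w_hat = dequantize(q, scale)
    print("max abs quantization error:", (w - w_hat).abs().max().item())
```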