quic / aimet-model-zoo
☆338 · Updated last year
Alternatives and similar repositories for aimet-model-zoo
Users who are interested in aimet-model-zoo are comparing it to the libraries listed below.
- A parser, editor and profiler tool for ONNX models. ☆462 · Updated this week
- TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework. ☆853 · Updated 2 months ago
- PyTorch Quantization Aware Training Example ☆143 · Updated last year
- ☆165 · Updated 4 months ago
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… ☆420 · Updated this week
- Inference of quantization aware trained networks using TensorRT ☆83 · Updated 2 years ago
- FakeQuantize with Learned Step Size (LSQ+) as Observer in PyTorch ☆36 · Updated 3 years ago
- PyTorch implementation of Data Free Quantization Through Weight Equalization and Bias Correction. ☆263 · Updated 2 years ago
- A simple network quantization demo using PyTorch from scratch. ☆538 · Updated 2 years ago
- Model Quantization Benchmark ☆847 · Updated 6 months ago
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,492 · Updated this week
- Offline Quantization Tools for Deploy. ☆141 · Updated last year
- A set of simple tools for splitting, merging, OP deletion, size compression, rewriting attributes and constants, OP generation, change op… ☆298 · Updated last year
- Count number of parameters / MACs / FLOPS for ONNX models. ☆94 · Updated last year
- Transform ONNX model to PyTorch representation ☆341 · Updated last week
- A code generator from ONNX to PyTorch code ☆141 · Updated 2 years ago
- Conversion of PyTorch models into TFLite ☆393 · Updated 2 years ago
- PyTorch implementation of BRECQ, ICLR 2021 ☆284 · Updated 4 years ago
- Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM. ☆451 · Updated 2 years ago
- ☆206 · Updated 4 years ago
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,098 · Updated this week
- ONNX Optimizer ☆769 · Updated last week
- TFLite model analyzer & memory optimizer ☆132 · Updated last year
- Benchmark for embedded-AI deep learning inference engines, such as NCNN / TNN / MNN / TensorFlow Lite, etc. ☆204 · Updated 4 years ago
- A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices. ☆360 · Updated last year
- [CVPR'20] ZeroQ: A Novel Zero Shot Quantization Framework ☆280 · Updated last year
- A model compression and acceleration toolbox based on PyTorch. ☆331 · Updated last year
- OTOv1-v3 (NeurIPS, ICLR, TMLR): DNN training, compression, structured pruning, erasing operators; CNN, diffusion, LLM ☆309 · Updated last year
- VeriSilicon Tensor Interface Module ☆238 · Updated 3 weeks ago
- Common utilities for ONNX converters ☆283 · Updated 2 months ago
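Most of the quantization tools listed above build on the same primitive: uniform affine (asymmetric) quantization, which maps floats to low-bit integers via a scale and zero-point. A minimal, dependency-free sketch of that math, with illustrative function names that do not come from any of the listed libraries:

```python
def choose_qparams(values, num_bits=8):
    """Pick a scale and zero-point covering the observed [min, max] range.

    The range is stretched to include 0.0 so that zero is exactly
    representable, a common requirement (e.g. for zero-padding).
    """
    lo = min(min(values), 0.0)
    hi = max(max(values), 0.0)
    scale = (hi - lo) / (2 ** num_bits - 1) or 1.0  # avoid a zero scale
    zero_point = round(-lo / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, num_bits=8):
    """Map a float to an unsigned num_bits integer, clamped to the valid range."""
    qmin, qmax = 0, 2 ** num_bits - 1
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Recover an approximation of the original float."""
    return (q - zero_point) * scale

# Round-trip a small set of example "weights" through 8-bit quantization.
weights = [-1.2, -0.4, 0.0, 0.7, 1.5]
scale, zp = choose_qparams(weights)
recovered = [dequantize(quantize(w, scale, zp), scale, zp) for w in weights]
```

The round-trip error of each value is bounded by the scale (the quantization step size), which is why tools such as AIMET, MQBench, and MCT focus on picking per-tensor or per-channel ranges that keep that step small where it matters.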