NVIDIA / sampleQAT
Inference of quantization-aware trained networks using TensorRT
☆83 · Updated 2 years ago
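sampleQAT covers running quantization-aware trained (QAT) networks through TensorRT. For context, here is a minimal eager-mode QAT sketch in PyTorch; the `TinyNet` model, layer sizes, and input shape are illustrative assumptions, not taken from sampleQAT:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    # Illustrative toy model; QuantStub/DeQuantStub mark where tensors
    # enter and leave the quantized region of the graph.
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = TinyNet().train()

# Attach a QAT qconfig and insert FakeQuantize modules so training
# "sees" quantization error on weights and activations.
model.qconfig = torch.ao.quantization.get_default_qat_qconfig("fbgemm")
torch.ao.quantization.prepare_qat(model, inplace=True)

# ... the normal training loop would run here, letting the
# fake-quant observers calibrate scales and zero-points ...
out = model(torch.randn(1, 3, 8, 8))

# After training, convert to a真 quantized model for int8 inference.
model.eval()
quantized = torch.ao.quantization.convert(model)
```

The converted model runs int8 inference on CPU; for TensorRT deployment (as in sampleQAT), the QAT network would instead be exported with quantize/dequantize nodes (e.g. via ONNX) and parsed by TensorRT.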
Alternatives and similar repositories for sampleQAT
Users interested in sampleQAT are comparing it to the libraries listed below.
- Count number of parameters / MACs / FLOPS for ONNX models. ☆95 · Updated last year
- PyTorch Quantization Aware Training Example ☆146 · Updated last year
- PyTorch implementation of Data Free Quantization Through Weight Equalization and Bias Correction. ☆264 · Updated 2 years ago
- FakeQuantize with Learned Step Size (LSQ+) as Observer in PyTorch ☆37 · Updated 4 years ago
- Offline Quantization Tools for Deploy. ☆141 · Updated last year
- A code generator from ONNX to PyTorch code ☆141 · Updated 3 years ago
- ☆68 · Updated 2 years ago
- Benchmark for embedded-AI deep learning inference engines, such as NCNN / TNN / MNN / TensorFlow Lite, etc. ☆204 · Updated 4 years ago
- A parser, editor and profiler tool for ONNX models. ☆469 · Updated last month
- ☆37 · Updated 3 years ago
- EasyQuant (EQ) is an efficient and simple post-training quantization method that works by effectively optimizing the scales of weights and activations. ☆405 · Updated 3 years ago
- ☆44 · Updated 4 years ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆98 · Updated 4 years ago
- Benchmark of TVM quantized model on CUDA ☆112 · Updated 5 years ago
- ☆243 · Updated 3 years ago
- A set of examples around MegEngine ☆31 · Updated 2 years ago
- PyTorch Static Quantization Example ☆38 · Updated 4 years ago
- Benchmark scripts for TVM ☆74 · Updated 3 years ago
- ☆168 · Updated 2 years ago
- A sample for onnxparser working with TRT user-defined plugins for TRT 7.0 ☆170 · Updated 5 years ago
- Graph Transforms to Quantize and Retrain Deep Neural Nets in TensorFlow ☆168 · Updated 6 years ago
- Quantization-aware training package for NCNN on PyTorch ☆69 · Updated 4 years ago
- TensorRT Plugin Autogen Tool ☆367 · Updated 2 years ago
- tophub autotvm log collections ☆69 · Updated 2 years ago
- [CVPR'20] ZeroQ: A Novel Zero Shot Quantization Framework ☆279 · Updated 2 years ago
- A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices. ☆358 · Updated last year
- NART (NART is not A RunTime), a deep learning inference framework. ☆37 · Updated 2 years ago
- ☆98 · Updated 4 years ago
- Symmetric int8 GEMM ☆67 · Updated 5 years ago
- This repository contains the results and code for the MLPerf™ Inference v0.5 benchmark. ☆55 · Updated 4 months ago