791136190 / awesome-qat
☆21 · Updated 2 years ago
Alternatives and similar repositories for awesome-qat:
Users interested in awesome-qat are comparing it to the libraries listed below.
- ONNX2Pytorch ☆160 · Updated 3 years ago
- Tengine Convert Tool supports converting models from multiple frameworks into tmfile, suitable for the Tengine-Lite AI framework. ☆92 · Updated 3 years ago
- arm-neon ☆90 · Updated 7 months ago
- Offline quantization tools for deployment. ☆124 · Updated last year
- Quantization-aware training package for NCNN on PyTorch ☆70 · Updated 3 years ago
- Basic quantization methods, including QAT, PTQ, per_channel, per_tensor, dorefa, lsq, adaround, omse, Histogram, bias_correction, etc. ☆42 · Updated 2 years ago
- Symmetric int8 GEMM ☆66 · Updated 4 years ago
- An NNIE quantization-aware training tool on PyTorch. ☆239 · Updated 4 years ago
- Everything in Torch Fx ☆342 · Updated 9 months ago
- One-stage object detection model based on YOLOv3, written in PyTorch. ☆9 · Updated 2 years ago
- A simple tutorial of SNPE. ☆167 · Updated last year
- EasyQuant(EQ) is an efficient and simple post-training quantization method via effectively optimizing the scales of weights and activatio… ☆394 · Updated 2 years ago
- A converter from MegEngine to other frameworks ☆69 · Updated last year
- An 8-bit automated quantization conversion tool for PyTorch (post-training quantization based on KL divergence) ☆33 · Updated 5 years ago
- ☆97 · Updated 3 years ago
- A set of examples around MegEngine ☆31 · Updated last year
- ☆95 · Updated 3 years ago
- ☆19 · Updated 3 years ago
- Inference of quantization-aware trained networks using TensorRT ☆80 · Updated 2 years ago
- Benchmark for embedded-AI deep learning inference engines, such as NCNN / TNN / MNN / TensorFlow Lite, etc. ☆203 · Updated 4 years ago
- PyTorch Quantization Aware Training Example ☆130 · Updated 9 months ago
- Based on the paper "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference" ☆62 · Updated 4 years ago
- Improving Post-Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆96 · Updated 3 years ago
- ☆44 · Updated 3 months ago
- ☆79 · Updated 4 years ago
- NART (NART is not A RunTime), a deep learning inference framework. ☆38 · Updated 2 years ago
- ☆71 · Updated 2 years ago
- Tengine GEMM tutorial, step by step ☆12 · Updated 4 years ago
- ☆21 · Updated 4 years ago
- Simulate quantization and quantization-aware training for MXNet-Gluon models. ☆46 · Updated 4 years ago
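Many of the repositories above build on the same core operation: symmetric fake quantization, where a value is rounded onto an int8 grid, clamped, and immediately dequantized so that the rounding error is visible during training (QAT) or calibration (PTQ). A minimal illustrative sketch in plain Python (the function name and values are my own, not taken from any listed repo):

```python
def fake_quantize(x, scale, qmin=-128, qmax=127):
    """Symmetric fake quantization: map x onto the int8 grid
    defined by `scale`, clamp to [qmin, qmax], then dequantize."""
    q = int(round(x / scale))        # round to the nearest integer step
    q = max(qmin, min(qmax, q))      # clamp to the signed 8-bit range
    return q * scale                 # dequantize back to float

# In-range values keep only rounding error:
print(fake_quantize(1.3, 0.5))    # -> 1.5
# Out-of-range values saturate at the clamping bounds:
print(fake_quantize(100.0, 0.5))  # -> 63.5 (127 * 0.5)
```

Real QAT frameworks apply this per tensor or per channel and use a straight-through estimator in the backward pass so gradients flow through the non-differentiable rounding.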