VeriSilicon / acuity-models
Acuity Model Zoo
☆143 Updated 2 years ago
Alternatives and similar repositories for acuity-models:
Users that are interested in acuity-models are comparing it to the libraries listed below
- VeriSilicon Tensor Interface Module ☆234 Updated 4 months ago
- Benchmark for embedded-AI deep learning inference engines, such as NCNN / TNN / MNN / TensorFlow Lite ☆204 Updated 4 years ago
- An NNIE quantization-aware training tool for PyTorch. ☆239 Updated 4 years ago
- Convert Caffe models to ONNX models ☆175 Updated 2 years ago
- Tengine GEMM tutorial, step by step ☆13 Updated 4 years ago
- Zhouyi model zoo ☆98 Updated 7 months ago
- Simulate quantization and quantization-aware training for MXNet-Gluon models. ☆46 Updated 5 years ago
- Optimizing Mobile Deep Learning on ARM GPU with TVM ☆181 Updated 6 years ago
- Tengine Convert Tool supports converting multiple frameworks' models into tmfile, suitable for the Tengine-Lite AI framework. ☆93 Updated 3 years ago
- ☆81 Updated 2 years ago
- Additions and patches to the Caffe framework for use with the Synopsys DesignWare EV Family of Processors ☆22 Updated 6 months ago
- Tencent NCNN with added CUDA support ☆69 Updated 4 years ago
- A quick view of high-performance convolutional neural network (CNN) inference engines on mobile devices. ☆150 Updated 2 years ago
- Utility scripts for editing or modifying ONNX models. Utility scripts to summarize ONNX model files along with visualization for loop ope… ☆79 Updated 3 years ago
- Parallel CUDA implementation of Non-Maximum Suppression ☆79 Updated 4 years ago
- Fork of https://source.codeaurora.org/quic/hexagon_nn/nnlib ☆57 Updated 2 years ago
- ☆126 Updated 4 years ago
- ☆147 Updated 6 years ago
- ⚡️ Using NNIE as simply as using ncnn ⚡️ ☆186 Updated 3 years ago
- DDK for Rockchip NPU ☆61 Updated 4 years ago
- TensorFlow Lite external delegate based on TIM-VX ☆47 Updated 4 months ago
- Generate a quantization parameter file for ncnn framework INT8 inference ☆519 Updated 4 years ago
- Based on the paper "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference" ☆63 Updated 4 years ago
- Quantization-aware training package for ncnn on PyTorch ☆70 Updated 3 years ago
- ncnn is a high-performance neural network inference framework optimized for the mobile platform ☆14 Updated 2 years ago
- ☆34 Updated 11 months ago
- Inference of quantization-aware trained networks using TensorRT ☆80 Updated 2 years ago
- This conversion tool is based on the TensorRT 2.0 INT8 calibration tools, which use the KL algorithm to find a suitable threshold to quantize t… ☆27 Updated 6 years ago
- Faster R-CNN module optimizations ☆2 Updated last year
- A Keras HDF5 to ncnn model converter ☆89 Updated 2 years ago