lcskrishna / onnx-parser
ONNX Parser is a tool that automatically generates OpenVX inference code for CNNs from ONNX binary model files.
☆18 · Updated 6 years ago
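At its core, a generator like this parses the ONNX protobuf and walks the operator graph before emitting backend (here, OpenVX) calls. As a rough illustration only, and not the tool's actual implementation, the official `onnx` Python package can be used to inspect the same graph structure; the model path below is hypothetical:

```python
# Illustrative sketch: inspect the operator graph that a code generator
# such as onnx-parser would translate into OpenVX inference calls.
import onnx

model = onnx.load("model.onnx")       # hypothetical path to an ONNX binary model
onnx.checker.check_model(model)       # validate the protobuf structure

for node in model.graph.node:         # iterate operators in graph order
    inputs = ", ".join(node.input)
    outputs = ", ".join(node.output)
    print(f"{node.op_type}: ({inputs}) -> ({outputs})")
```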
Alternatives and similar repositories for onnx-parser
Users interested in onnx-parser are comparing it to the libraries listed below.
- Test of Winograd convolution written in TVM for CUDA and AMDGPU ☆41 · Updated 6 years ago
- A PyTorch implementation of Scalpel: node pruning for five benchmark networks and SIMD-aware weight pruning for LeNet-300-100… ☆41 · Updated 6 years ago
- OpenCL implementation of an NN and a CNN ☆22 · Updated 7 years ago
- PyTorch -> ONNX -> TVM for autotuning ☆24 · Updated 5 years ago
- Training neural networks with 8-bit computations ☆28 · Updated 9 years ago
- LCNN: Lookup-based Convolutional Neural Network ☆52 · Updated 7 years ago
- Implementation of convolution layer in different flavors ☆68 · Updated 7 years ago
- A prototype implementation of AllReduce collective communication routine. ☆19 · Updated 6 years ago
- CNNs in Halide ☆23 · Updated 9 years ago
- Library for fast image convolution in neural networks on Intel Architecture ☆30 · Updated 8 years ago
- Lightweight C implementation of CNNs for Embedded Systems ☆61 · Updated 2 years ago
- ☆62 · Updated 7 years ago
- Greentea LibDNN - a universal convolution implementation supporting CUDA and OpenCL ☆136 · Updated 8 years ago
- ☆19 · Updated last year
- ☆13 · Updated 8 years ago
- DNN Inference with CPU, C++, ONNX support: Instant ☆56 · Updated 6 years ago
- Efficient forward propagation for BCNNs ☆50 · Updated 8 years ago
- Accelerating DNN Convolutional Layers with Micro-batches ☆63 · Updated 5 years ago
- Fast matrix multiplication for few-bit integer matrices on CPUs. ☆28 · Updated 6 years ago
- Fast binary matrix product on CPU ☆10 · Updated 9 years ago
- SqueezeNet Generator ☆31 · Updated 7 years ago
- Code for High-Capacity Expert Binary Networks (ICLR 2021). ☆27 · Updated 3 years ago
- Tutorial on optimizing GEMM performance on Android ☆51 · Updated 9 years ago
- Binarized Neural Network ☆9 · Updated 8 years ago
- An example of MXNet model compilation and deployment with NNVM in C++ ☆16 · Updated 7 years ago
- [ECCV18] Constraint-Aware Deep Neural Network Compression ☆12 · Updated 6 years ago
- Training Low-bits DNNs with Stochastic Quantization ☆74 · Updated 7 years ago
- Quantized Neural Networks - networks trained for inference at arbitrarily low precision. ☆146 · Updated 7 years ago
- Bridging Caffe2 with YOLO, especially on mobile devices ☆16 · Updated 8 years ago
- RidgeRun Inference Framework ☆27 · Updated 2 years ago