CAS-CLab / CNN-Inference-Engine-Quick-View
A quick view of high-performance convolution neural networks (CNNs) inference engines on mobile devices.
☆150 · Updated 2 years ago
Alternatives and similar repositories for CNN-Inference-Engine-Quick-View:
Users interested in CNN-Inference-Engine-Quick-View are comparing it to the libraries listed below.
- Optimizing Mobile Deep Learning on ARM GPU with TVM ☆181 · Updated 6 years ago
- Benchmark of TVM quantized model on CUDA ☆111 · Updated 4 years ago
- This code is an implementation of a trained YOLO neural network used with the TensorRT framework. ☆88 · Updated 8 years ago
- Tengine GEMM tutorial, step by step ☆13 · Updated 4 years ago
- Reproduction of MobileNetV2 using MXNet ☆128 · Updated 6 years ago
- This repository has moved. The new link can be obtained from https://github.com/TexasInstruments/jacinto-ai-devkit ☆116 · Updated 5 years ago
- This is a CNN Analyzer tool, based on Netscope by dgschwend/netscope ☆41 · Updated 7 years ago
- Merge Batch Norm layers into Caffe convolutions (see the folding sketch after this list) ☆64 · Updated 6 years ago
- ☆27 · Updated 8 years ago
- Benchmark of ncnn, a high-performance neural network inference framework optimized for mobile platforms ☆72 · Updated 6 years ago
- Use the TensorRT API to implement Caffe-SSD, SSD (channel pruning), and MobileNet-SSD ☆250 · Updated 6 years ago
- Convert MXNet model to Caffe model ☆163 · Updated 6 years ago
- Convert a Torch module to a TensorRT network or TVM function ☆88 · Updated 5 years ago
- Generate a quantization parameter file for ncnn framework int8 inference ☆519 · Updated 4 years ago
- ☆38 · Updated 8 years ago
- Simulate quantization and quantization-aware training for MXNet-Gluon models. ☆46 · Updated 5 years ago
- Heterogeneous Run Time version of MXNet. Adds heterogeneous capabilities to MXNet using a heterogeneous computing infrastructure frame… ☆72 · Updated 7 years ago
- MobileNet and Darknet YOLO ☆97 · Updated 7 years ago
- A Caffe implementation of MobileNet's depthwise convolution layer. ☆145 · Updated 7 years ago
- ☆67 · Updated 5 years ago
- Simple pruning example using Caffe ☆33 · Updated 7 years ago
- This repository contains the results and code for the MLPerf™ Inference v0.5 benchmark. ☆55 · Updated last year
- Demonstrate Plugin API for TensorRT 2.1 ☆182 · Updated 7 years ago
- A MXNet/Gluon implementation of MobileNetV2 ☆86 · Updated 7 years ago
- Convert a Caffe model to an ONNX model ☆175 · Updated 2 years ago
- Caffe re-implementation of ShuffleNet ☆106 · Updated 7 years ago
- Adds a quantization layer to Caffe (supports a coarse-level fixed-point simulation) ☆22 · Updated 8 years ago
- Neural Network Tools: Converter and Analyzer. For Caffe, PyTorch, Darknet, and so on. ☆355 · Updated 4 years ago
- Parallel CUDA implementation of non-maximum suppression ☆79 · Updated 4 years ago
- Caffe implementation of a ReLU6 layer ☆46 · Updated 7 years ago
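
The Batch Norm merging entry above refers to a common deployment step: folding a BatchNorm layer into the convolution that precedes it, so inference runs a single fused layer. Below is a minimal NumPy sketch of that folding; the function name and the (out_channels, in_channels, kh, kw) weight layout are illustrative assumptions, not code taken from the listed repository.

```python
import numpy as np

def fold_batchnorm(conv_w, conv_b, bn_gamma, bn_beta, bn_mean, bn_var, eps=1e-5):
    """Fold a BatchNorm layer that follows a convolution into the conv weights/bias.

    conv_w: (out_c, in_c, kh, kw) convolution weights
    conv_b: (out_c,) convolution bias (use zeros if the conv has no bias term)
    bn_*:   per-output-channel BatchNorm parameters, each of shape (out_c,)
    """
    scale = bn_gamma / np.sqrt(bn_var + eps)          # per-channel scale from BN statistics
    folded_w = conv_w * scale[:, None, None, None]    # scale each output filter
    folded_b = (conv_b - bn_mean) * scale + bn_beta   # absorb the BN mean/shift into the bias
    return folded_w, folded_b
```

The folded weights and bias then replace the original conv + BN pair at inference time.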