merrymercy / tvm-mali
Optimizing Mobile Deep Learning on ARM GPU with TVM
☆180 · Updated 6 years ago
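For context on what the repository covers: tuning and compiling a model for an ARM Mali GPU with TVM boils down to building with an OpenCL target that sets `device=mali`. The sketch below is a minimal illustration using the Relay API of more recent TVM releases (the original tvm-mali scripts predate it); the choice of network (ResNet-18 from `relay.testing`), the cross-compiler name, and the output file name are assumptions for illustration only.

```python
# Minimal sketch (assumptions: a recent TVM build with the Relay frontend;
# ResNet-18 from relay.testing is just a stand-in workload).
import tvm
from tvm import relay
from tvm.relay import testing

# Example network and its parameters.
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)

# Mali GPUs are driven through OpenCL; the host side here is an AArch64 CPU.
target = tvm.target.Target("opencl -device=mali", host="llvm -mtriple=aarch64-linux-gnu")

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Cross-compile the runtime module so it can be pushed to the board
# (the compiler name is an assumption; adjust for your toolchain).
lib.export_library("net.so", cc="aarch64-linux-gnu-g++")
```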
Alternatives and similar repositories for tvm-mali:
Users interested in tvm-mali are comparing it to the libraries listed below
- Benchmark of TVM quantized models on CUDA ☆111 · Updated 4 years ago
- A quick view of high-performance convolutional neural network (CNN) inference engines on mobile devices. ☆150 · Updated 2 years ago
- Heterogeneous Run Time version of MXNet. Added heterogeneous capabilities to MXNet, uses heterogeneous computing infrastructure frame… ☆72 · Updated 7 years ago
- Efficient Sparse-Winograd Convolutional Neural Networks (ICLR 2018) ☆190 · Updated 5 years ago
- Simulate quantization and quantization aware training for MXNet-Gluon models. ☆46 · Updated 4 years ago
- Caffe implementation of accurate low-precision neural networks ☆117 · Updated 6 years ago
- Caffe for Sparse Convolutional Neural Network ☆238 · Updated 2 years ago
- Tengine GEMM tutorial, step by step ☆12 · Updated 4 years ago
- Fork of https://source.codeaurora.org/quic/hexagon_nn/nnlib ☆57 · Updated last year
- This is a CNN Analyzer tool, based on Netscope by dgschwend/netscope ☆41 · Updated 7 years ago
- Simple pruning example using Caffe ☆33 · Updated 7 years ago
- Benchmark of ncnn, a high-performance neural network inference framework optimized for mobile platforms ☆72 · Updated 6 years ago
- This repository contains the results and code for the MLPerf™ Inference v0.5 benchmark. ☆55 · Updated last year
- Tutorial to optimize GEMM performance on Android ☆51 · Updated 9 years ago
- Merge Batch Norm for Caffe ☆64 · Updated 6 years ago
- Added a quantization layer to Caffe (supports a coarse-level fixed-point simulation) ☆22 · Updated 8 years ago
- Ristretto: Caffe-based approximation of convolutional neural networks. ☆291 · Updated 5 years ago
- ☆27 · Updated 8 years ago
- This repository has moved. The new link can be obtained from https://github.com/TexasInstruments/jacinto-ai-devkit ☆116 · Updated 4 years ago
- Generate a quantization parameter file for ncnn framework int8 inference ☆519 · Updated 4 years ago
- Heterogeneous Run Time version of Caffe. Added heterogeneous capabilities to Caffe, uses heterogeneous computing infrastructure frame… ☆268 · Updated 6 years ago
- Binary Weight Network and XNOR Network. ☆63 · Updated 8 years ago
- Demonstrates the Plugin API for TensorRT 2.1 ☆182 · Updated 7 years ago
- ☆26 · Updated 8 years ago
- ☆67 · Updated 5 years ago
- A pyCaffe implementation of the ICLR 2017 publication "Pruning Filters for Efficient ConvNets" ☆43 · Updated 6 years ago
- Benchmark for embedded-AI deep learning inference engines, such as NCNN / TNN / MNN / TensorFlow Lite, etc. ☆203 · Updated 4 years ago
- tophub AutoTVM log collections ☆70 · Updated 2 years ago
- Caffe implementation of Incremental Network Quantization ☆191 · Updated 6 years ago
- ☆45 · Updated 2 years ago