XiaoMi / mobile-ai-bench
Benchmarking Neural Network Inference on Mobile Devices
☆370 · Updated 2 years ago
Alternatives and similar repositories for mobile-ai-bench:
Users interested in mobile-ai-bench are comparing it to the libraries listed below.
- Mobile AI Compute Engine Model Zoo ☆377 · Updated 3 years ago
- Benchmark for embedded-AI deep learning inference engines, such as NCNN / TNN / MNN / TensorFlow Lite, etc. ☆204 · Updated 4 years ago
- Daquexian's NNAPI Library. ONNX + Android NNAPI ☆350 · Updated 5 years ago
- Generate a quantization parameter file for ncnn framework int8 inference ☆519 · Updated 4 years ago
- A quick view of high-performance convolutional neural network (CNN) inference engines on mobile devices. ☆150 · Updated 2 years ago
- Acuity Model Zoo ☆142 · Updated 2 years ago
- Optimizing Mobile Deep Learning on ARM GPU with TVM ☆181 · Updated 6 years ago
- WeChat: NeuralTalk, weekly report and awesome list of embedded AI. ☆379 · Updated 2 years ago
- EasyQuant (EQ) is an efficient and simple post-training quantization method via effectively optimizing the scales of weights and activations. ☆397 · Updated 2 years ago
- PyTorch model to Caffe & ncnn ☆394 · Updated 6 years ago
- High-performance cross-platform inference engine; you can run Anakin on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices. ☆533 · Updated 2 years ago
- MNN applications built with MNN, JNI execution, RK3399. Supports TFLite / TensorFlow / Caffe / ONNX models. ☆506 · Updated 5 years ago
- Heterogeneous Run Time version of Caffe. Adds heterogeneous capabilities to Caffe and uses a heterogeneous computing infrastructure framework… ☆268 · Updated 6 years ago
- VeriSilicon Tensor Interface Module ☆234 · Updated 3 months ago
- Arm NEON related documentation and instruction semantics ☆241 · Updated 5 years ago
- heterogeneity-aware-lowering-and-optimization ☆255 · Updated last year
- Tengine Convert Tool supports converting models from multiple frameworks into the tmfile format used by the Tengine-Lite AI framework. ☆93 · Updated 3 years ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆940 · Updated last week
- Minimal runtime core of Caffe: forward only, GPU support, and memory efficiency. ☆373 · Updated 4 years ago
- Tengine GEMM tutorial, step by step ☆13 · Updated 4 years ago
- Convert Caffe models to ONNX models ☆175 · Updated 2 years ago
- Efficient Sparse-Winograd Convolutional Neural Networks (ICLR 2018) ☆190 · Updated 5 years ago
- MediaTek's TFLite delegate ☆45 · Updated last year
- ☆81 · Updated 2 years ago
- Android project using the MobileNet-SSD object detection framework with ncnn forward inference ☆200 · Updated 6 years ago
- PyTorch to Caffe via ONNX ☆376 · Updated 5 years ago
- Fork of https://source.codeaurora.org/quic/hexagon_nn/nnlib ☆57 · Updated 2 years ago
- MTCNN face detection project for Windows and Android built on the ncnn framework ☆529 · Updated 6 years ago
- An NNIE quantization-aware training tool for PyTorch. ☆239 · Updated 4 years ago
- A very fast neural network computing framework optimized for mobile platforms. QQ group: 676883532 (verification message: 绝影) ☆268 · Updated 7 years ago