JDAI-CV / DNNLibrary
Daquexian's NNAPI library: run ONNX models on Android through the Neural Networks API (NNAPI).
☆346 · Updated 4 years ago
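For context on what "Android NNAPI" means in practice, the sketch below uses the raw NNAPI C API from `<android/NeuralNetworks.h>` (API level 27+) to build, compile, and run a one-operation graph. This is not DNNLibrary's own API; DNNLibrary and several projects listed below exist to hide exactly this kind of boilerplate. The function name `run_add_example` and the tensor shapes are illustrative assumptions only.

```c
// Minimal raw-NNAPI sketch: add two float32 tensors on-device.
// Illustrative only; DNNLibrary wraps this kind of boilerplate.
#include <android/NeuralNetworks.h>
#include <stdint.h>
#include <stdio.h>

int run_add_example(void) {
    const uint32_t dims[] = {1, 4};
    ANeuralNetworksOperandType tensorType = {
        .type = ANEURALNETWORKS_TENSOR_FLOAT32,
        .dimensionCount = 2, .dimensions = dims,
        .scale = 0.0f, .zeroPoint = 0,
    };
    ANeuralNetworksOperandType scalarType = {
        .type = ANEURALNETWORKS_INT32,
        .dimensionCount = 0, .dimensions = NULL,
        .scale = 0.0f, .zeroPoint = 0,
    };

    // 1. Describe the graph: out = in0 + in1, no fused activation.
    ANeuralNetworksModel* model = NULL;
    ANeuralNetworksModel_create(&model);
    ANeuralNetworksModel_addOperand(model, &tensorType);  // index 0: in0
    ANeuralNetworksModel_addOperand(model, &tensorType);  // index 1: in1
    ANeuralNetworksModel_addOperand(model, &scalarType);  // index 2: fuse code
    ANeuralNetworksModel_addOperand(model, &tensorType);  // index 3: out
    int32_t fuseNone = ANEURALNETWORKS_FUSED_NONE;
    ANeuralNetworksModel_setOperandValue(model, 2, &fuseNone, sizeof(fuseNone));
    uint32_t addInputs[] = {0, 1, 2}, addOutputs[] = {3};
    ANeuralNetworksModel_addOperation(model, ANEURALNETWORKS_ADD,
                                      3, addInputs, 1, addOutputs);
    uint32_t modelInputs[] = {0, 1}, modelOutputs[] = {3};
    ANeuralNetworksModel_identifyInputsAndOutputs(model, 2, modelInputs,
                                                  1, modelOutputs);
    ANeuralNetworksModel_finish(model);

    // 2. Compile for the device's preferred accelerator.
    ANeuralNetworksCompilation* compilation = NULL;
    ANeuralNetworksCompilation_create(model, &compilation);
    ANeuralNetworksCompilation_setPreference(
        compilation, ANEURALNETWORKS_PREFER_FAST_SINGLE_ANSWER);
    ANeuralNetworksCompilation_finish(compilation);

    // 3. Run one inference.
    float in0[] = {1, 2, 3, 4}, in1[] = {10, 20, 30, 40}, out[4] = {0};
    ANeuralNetworksExecution* execution = NULL;
    ANeuralNetworksExecution_create(compilation, &execution);
    ANeuralNetworksExecution_setInput(execution, 0, NULL, in0, sizeof(in0));
    ANeuralNetworksExecution_setInput(execution, 1, NULL, in1, sizeof(in1));
    ANeuralNetworksExecution_setOutput(execution, 0, NULL, out, sizeof(out));
    ANeuralNetworksEvent* event = NULL;
    ANeuralNetworksExecution_startCompute(execution, &event);
    ANeuralNetworksEvent_wait(event);
    printf("out[0] = %f\n", out[0]);  // expected 11.0

    // 4. Release everything in reverse order of creation.
    ANeuralNetworksEvent_free(event);
    ANeuralNetworksExecution_free(execution);
    ANeuralNetworksCompilation_free(compilation);
    ANeuralNetworksModel_free(model);
    return 0;
}
```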
Related projects
Alternatives and complementary repositories for DNNLibrary
- Benchmarking Neural Network Inference on Mobile Devices ☆359 · Updated last year
- Mobile AI Compute Engine Model Zoo ☆371 · Updated 3 years ago
- Benchmark for embedded-AI deep learning inference engines such as NCNN / TNN / MNN / TensorFlow Lite ☆202 · Updated 3 years ago
- A quick view of high-performance convolutional neural network (CNN) inference engines on mobile devices. ☆150 · Updated 2 years ago
- [Deprecated] Convert caffemodel to DNNLibrary's daq format ☆18 · Updated 6 years ago
- Use ncnn in Android and iOS; ncnn is a high-performance neural network inference framework optimized for mobile platforms ☆295 · Updated 2 months ago
- Generate a quantization parameter file for ncnn framework int8 inference ☆521 · Updated 4 years ago
- Optimizing Mobile Deep Learning on ARM GPU with TVM ☆179 · Updated 6 years ago
- WeChat: NeuralTalk, a weekly report and awesome list for embedded AI ☆375 · Updated 2 years ago
- Open Source Library for GPU-Accelerated Execution of Trained Deep Convolutional Neural Networks on Android ☆539 · Updated 7 years ago
- Demonstration of using Caffe2 inside an Android application. ☆348 · Updated 5 years ago
- Heterogeneous Run Time version of Caffe. Adds heterogeneous capabilities to Caffe using a heterogeneous computing infrastructure frame… ☆269 · Updated 6 years ago
- Optimized (for size and speed) Caffe lib for iOS and Android with out-of-the-box demo APP. ☆316 · Updated 6 years ago
- Light version of the convolutional neural network Yolo v3 & v2 for object detection with a minimum of dependencies (INT8-inference, BIT1-XNO… ☆302 · Updated 5 years ago
- High-performance cross-platform inference engine; you can run Anakin on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices ☆532 · Updated 2 years ago
- TVM integration into PyTorch ☆452 · Updated 4 years ago
- Embedded and Mobile Deployment ☆71 · Updated 6 years ago
- Explore the Capabilities of the TensorRT Platform ☆260 · Updated 3 years ago
- Convert Caffe models to ONNX models ☆175 · Updated last year
- Minimal runtime core of Caffe: forward only, GPU support, and memory efficiency ☆374 · Updated 4 years ago
- FeatherCNN is a high-performance inference engine for convolutional neural networks. ☆1,211 · Updated 5 years ago
- Pytorch model to caffe & ncnn ☆393 · Updated 6 years ago
- Demonstrate Plugin API for TensorRT 2.1 ☆182 · Updated 7 years ago
- Use TensorRT API to implement Caffe-SSD, SSD (channel pruning), Mobilenet-SSD ☆250 · Updated 6 years ago
- ☆209 · Updated 6 years ago
- The SqueezeNet image classification Android example ☆154 · Updated 7 months ago
- This repository contains the results and code for the MLPerf™ Inference v0.5 benchmark. ☆55 · Updated last year
- Android project using the Mobilenet-SSD object detection framework with ncnn forward inference ☆200 · Updated 5 years ago
- Caffe for Sparse Convolutional Neural Network ☆238 · Updated last year