Tencent / FeatherCNN
FeatherCNN is a high-performance inference engine for convolutional neural networks.
☆1,220 · Updated 5 years ago
Alternatives and similar repositories for FeatherCNN
Users interested in FeatherCNN are comparing it to the libraries listed below.
- High-performance cross-platform inference engine; Anakin runs on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices. ☆533 · Updated 2 years ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators. ☆1,543 · Updated 5 years ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆953 · Updated 3 months ago
- Mobile AI Compute Engine Model Zoo. ☆376 · Updated 4 years ago
- Benchmarking Neural Network Inference on Mobile Devices. ☆377 · Updated 2 years ago
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications. ☆2,910 · Updated 2 years ago
- A very fast neural network computing framework optimized for mobile platforms. QQ group: 676883532 (verification message: 绝影). ☆268 · Updated 7 years ago
- Caffe implementation of Google's MobileNets (v1 and v2). ☆1,270 · Updated 4 years ago
- Daquexian's NNAPI library: ONNX + Android NNAPI. ☆350 · Updated 5 years ago
- Minimal runtime core of Caffe: forward only, with GPU support and memory efficiency. ☆373 · Updated 5 years ago
- Generate a quantization parameter file for ncnn framework int8 inference. ☆518 · Updated 5 years ago
- Optimized (for size and speed) Caffe lib for iOS and Android with an out-of-the-box demo app. ☆315 · Updated 6 years ago
- Caffe_Code_Analysis. ☆418 · Updated 8 years ago
- This fork of BVLC/Caffe is dedicated to improving the performance of this deep learning framework when running on CPU, in particular Intel® X… ☆850 · Updated 2 years ago
- WeChat: NeuralTalk. Weekly report and awesome list of embedded AI. ☆379 · Updated 3 years ago
- A personal depthwise convolution layer implementation in Caffe by liuhao (GPU only). ☆525 · Updated 4 years ago
- Use ncnn in Android and iOS; ncnn is a high-performance neural network inference framework optimized for mobile platforms. ☆298 · Updated 11 months ago
- Low-precision matrix multiplication. ☆1,812 · Updated last year
- ☆209 · Updated 7 years ago
- Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17). ☆1,085 · Updated last year
- TVM integration into PyTorch. ☆453 · Updated 5 years ago
- MNN applications (JNI exec, RK3399); supports TFLite, TensorFlow, Caffe, and ONNX models. ☆507 · Updated 5 years ago
- Winograd minimal convolution algorithm generator for convolutional neural networks. ☆619 · Updated 4 years ago
- Caffe for Deep Compression. ☆239 · Updated 7 years ago
- PyTorch model to Caffe & ncnn. ☆393 · Updated 7 years ago
- ☆125 · Updated 7 years ago
- This is a fast Caffe implementation of ShuffleNet. ☆452 · Updated 6 years ago
- MTCNN face detection project for Windows and Android built on the ncnn framework. ☆528 · Updated 6 years ago
- Open-source library for GPU-accelerated execution of trained deep convolutional neural networks on Android. ☆540 · Updated 8 years ago
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. ☆5,017 · Updated last year