Benchmarking Neural Network Inference on Mobile Devices
☆386 · Apr 10, 2023 · Updated 2 years ago
Alternatives and similar repositories for mobile-ai-bench
Users interested in mobile-ai-bench are comparing it to the libraries listed below.
- Mobile AI Compute Engine Model Zoo ☆376 · Jul 5, 2021 · Updated 4 years ago
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. ☆5,036 · Jun 17, 2024 · Updated last year
- CK-NNTest: collaboratively validating, benchmarking and optimizing neural net operators across platforms, frameworks and datasets ☆15 · Jul 10, 2021 · Updated 4 years ago
- FeatherCNN is a high-performance inference engine for convolutional neural networks. ☆1,226 · Sep 24, 2019 · Updated 6 years ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,548 · Aug 28, 2019 · Updated 6 years ago
- Benchmark for embedded-AI deep learning inference engines, such as NCNN / TNN / MNN / TensorFlow Lite, etc. ☆201 · Feb 18, 2021 · Updated 5 years ago
- Generate a quantization parameter file for ncnn framework int8 inference ☆518 · Jul 29, 2020 · Updated 5 years ago
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications. ☆2,909 · Mar 31, 2023 · Updated 3 years ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆957 · Apr 11, 2025 · Updated 11 months ago
- MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering high-performance on-device LLMs and Edge AI. ☆14,753 · Apr 2, 2026 · Updated last week
- Facebook AI Performance Evaluation Platform ☆394 · Apr 2, 2026 · Updated last week
- Arm NN ML Software. ☆1,301 · Jan 23, 2026 · Updated 2 months ago
- A quick view of high-performance convolutional neural network (CNN) inference engines on mobile devices. ☆151 · Jun 13, 2022 · Updated 3 years ago
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologi… ☆3,126 · Apr 2, 2026 · Updated last week
- A very fast neural network computing framework optimized for mobile platforms. QQ group: 676883532 (verification message: 绝影) ☆268 · Jan 4, 2018 · Updated 8 years ago
- High-performance cross-platform inference engine; you can run Anakin on x86 CPU, Arm, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices. ☆537 · Sep 23, 2022 · Updated 3 years ago
- dabnn is an accelerated binary neural network inference framework for mobile platforms ☆778 · Nov 12, 2019 · Updated 6 years ago
- ncnn is a high-performance neural network inference framework optimized for the mobile platform ☆23,051 · Updated this week
- Embedded and mobile deep learning research resources ☆765 · Mar 14, 2023 · Updated 3 years ago
- Heterogeneous runtime version of Caffe. Adds heterogeneous capabilities to Caffe, using a heterogeneous computing infrastructure frame… ☆269 · Oct 16, 2018 · Updated 7 years ago
- PaddlePaddle high-performance deep learning inference engine for mobile and edge ☆7,245 · May 22, 2025 · Updated 10 months ago
- face-landmark based on ncnn ☆69 · Aug 20, 2017 · Updated 8 years ago
- Tengine is a lightweight, high-performance, modular inference engine for embedded devices ☆4,515 · Mar 6, 2025 · Updated last year
- Daquexian's NNAPI library: ONNX + Android NNAPI ☆350 · Feb 20, 2020 · Updated 6 years ago
- ncnn is a high-performance neural network inference framework optimized for the mobile platform ☆14 · May 20, 2022 · Updated 3 years ago
- WeChat: NeuralTalk, weekly report and awesome list of embedded AI. ☆382 · Jul 1, 2022 · Updated 3 years ago
- MMdnn is a set of tools to help users inter-operate among different deep learning frameworks, e.g. model conversion and visualization. Co… ☆5,812 · Aug 7, 2025 · Updated 8 months ago
- Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17) ☆1,089 · May 2, 2024 · Updated last year
- A neural network forward-propagation framework based on OpenGL ES ☆19 · Nov 23, 2018 · Updated 7 years ago
- The benchmark of ncnn, a high-performance neural network inference framework optimized for the mobile platform ☆72 · Mar 8, 2019 · Updated 7 years ago
- Porting Caffe to the Android platform ☆507 · Dec 11, 2018 · Updated 7 years ago
- MTCNN Face Detection & Alignment ☆203 · Sep 8, 2017 · Updated 8 years ago
- TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop and server. TNN is … ☆4,631 · May 9, 2025 · Updated 11 months ago
- Minimal runtime core of Caffe: forward only, GPU support and memory efficiency. ☆375 · Jul 15, 2020 · Updated 5 years ago
- Caffe implementation of Google's MobileNets (v1 and v2) ☆1,273 · Jun 8, 2021 · Updated 4 years ago
- Page for the CVPR 2023 Tutorial - Efficient Neural Networks: From Algorithm Design to Practical Mobile Deployments ☆12 · Jun 30, 2023 · Updated 2 years ago
- Mobile vision models and code ☆920 · Feb 11, 2026 · Updated last month
- MobileNet SSD @ ncnn ☆72 · Oct 5, 2017 · Updated 8 years ago
- PyTorch model to Caffe & ncnn ☆394 · Jun 27, 2018 · Updated 7 years ago
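The benchmark projects above (mobile-ai-bench, the ncnn benchmark, the embedded-AI engine benchmark) share the same basic timing recipe: run a few warmup inferences to settle caches and thread pools, then report a robust statistic over repeated timed runs rather than a single measurement. A minimal, framework-agnostic sketch of that recipe in Python — the function name and defaults are illustrative, not taken from any of the listed repositories:

```python
import statistics
import time

def benchmark(fn, warmup=5, runs=20):
    """Time a zero-argument callable the way mobile inference benchmarks
    typically do. Returns (median_ms, best_ms)."""
    # Warmup runs are discarded: the first inferences are skewed by
    # cold caches, lazy initialization, and CPU frequency ramp-up.
    for _ in range(warmup):
        fn()
    samples_ms = []
    for _ in range(runs):
        t0 = time.perf_counter()  # monotonic, high-resolution clock
        fn()
        samples_ms.append((time.perf_counter() - t0) * 1000.0)
    # Median is robust to scheduler noise; min approximates best-case latency.
    return statistics.median(samples_ms), min(samples_ms)
```

In practice `fn` would wrap one forward pass of the engine under test (e.g. an ncnn or TFLite interpreter invocation), and real benchmarks additionally pin CPU affinity and report percentiles, but the warmup-then-aggregate structure is the common core.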