Arm NN ML Software.
☆1,301 · Updated Jan 23, 2026
Alternatives and similar repositories for armnn
Users interested in armnn are comparing it to the libraries listed below.
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologies. ☆3,139 · Updated Apr 23, 2026
- ☆157 · Updated Feb 19, 2025
- Arm Machine Learning tutorials and examples. ☆485 · Updated Apr 17, 2026
- Tengine is a lightweight, high-performance, modular inference engine for embedded devices. ☆4,521 · Updated Mar 6, 2025
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. ☆5,037 · Updated Jun 17, 2024
- Arm mlperf.org benchmark port. ☆27 · Updated Jan 3, 2022
- Heterogeneous Run Time version of Caffe, which adds heterogeneous computing capabilities to Caffe using a heterogeneous computing infrastructure framework. ☆269 · Updated Oct 16, 2018
- Open Machine Learning Compiler Framework. ☆13,304 · Updated this week
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators. ☆1,548 · Updated Aug 28, 2019
- High-efficiency floating-point neural network inference operators for mobile, server, and Web. ☆2,326 · Updated this week
- MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering high-performance on-device LLMs and Edge AI. ☆15,009 · Updated this week
- CMSIS Version 5 Development Repository. ☆1,586 · Updated Sep 3, 2024
- Driver stack (including user space libraries, kernel module and firmware) for the Arm® Ethos™-N NPU. ☆69 · Updated Apr 1, 2025
- ncnn is a high-performance neural network inference framework optimized for the mobile platform. ☆23,151 · Updated Apr 22, 2026
- Low-precision matrix multiplication. ☆1,841 · Updated Jan 29, 2024
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆958 · Updated Apr 11, 2025
- Keyword spotting on Arm Cortex-M microcontrollers. ☆1,235 · Updated Apr 10, 2019
- FeatherCNN is a high-performance inference engine for convolutional neural networks. ☆1,227 · Updated Sep 24, 2019
- VeriSilicon Tensor Interface Module. ☆252 · Updated Mar 30, 2026
- Optimizing Mobile Deep Learning on ARM GPU with TVM. ☆183 · Updated Oct 15, 2018
- TNN: a uniform deep learning inference framework for mobile, desktop, and server, developed by Tencent Youtu Lab and Guangying Lab. ☆4,631 · Updated May 9, 2025
- Compiler for Neural Network hardware accelerators. ☆3,326 · Updated May 11, 2024
- TinyML AI inference library. ☆1,923 · Updated May 10, 2025
- CMSIS-NN Library. ☆387 · Updated Apr 20, 2026
- An open optimized software library project for the Arm® Architecture. ☆1,534 · Updated Dec 9, 2022
- oneAPI Deep Neural Network Library (oneDNN). ☆3,984 · Updated this week
- Benchmarking Neural Network Inference on Mobile Devices. ☆388 · Updated Apr 10, 2023
- Generate a quantization parameter file for ncnn framework int8 inference. ☆517 · Updated Jul 29, 2020
- Open Neural Network Compiler. ☆530 · Updated Aug 22, 2023
- Simplify your ONNX model. ☆4,328 · Updated this week
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆1,000 · Updated Sep 19, 2024
- Daquexian's NNAPI library: ONNX + Android NNAPI. ☆350 · Updated Feb 20, 2020
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure. ☆1,005 · Updated this week
- Scripts to build a wheel and a Docker image containing a complete ML framework stack, including dependencies, for AArch64 CPUs, as well as … ☆273 · Updated this week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT. ☆12,947 · Updated Apr 13, 2026
- dabnn is an accelerated binary neural network inference framework for mobile platforms. ☆777 · Updated Nov 12, 2019
- PaddlePaddle high-performance deep learning inference engine for mobile and edge devices. ☆7,248 · Updated this week
- Acceleration package for neural networks on multi-core CPUs. ☆1,704 · Updated Jun 11, 2024
- MMdnn is a set of tools to help users inter-operate among different deep learning frameworks, e.g. model conversion and visualization. ☆5,811 · Updated Aug 7, 2025