ARM-software / armnn
Arm NN ML Software. The code here is a read-only mirror of https://review.mlplatform.org/admin/repos/ml/armnn
☆1,262 · Updated this week
Alternatives and similar repositories for armnn
Users interested in armnn are comparing it to the libraries listed below.
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologi… ☆2,974 · Updated 2 weeks ago
- Arm Machine Learning tutorials and examples ☆460 · Updated last week
- Low-precision matrix multiplication ☆1,803 · Updated last year
- ☆156 · Updated 3 months ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,541 · Updated 5 years ago
- VeriSilicon Tensor Interface Module ☆234 · Updated 4 months ago
- CMSIS-NN Library ☆275 · Updated last week
- ☆228 · Updated 2 years ago
- Open Neural Network Compiler ☆523 · Updated last year
- Embedded and mobile deep learning research resources ☆753 · Updated 2 years ago
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,030 · Updated this week
- ONNX Optimizer ☆715 · Updated this week
- MLPerf™ Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers ☆407 · Updated this week
- Winograd minimal convolution algorithm generator for convolutional neural networks. ☆618 · Updated 4 years ago
- Benchmark for embedded-AI deep learning inference engines, such as NCNN / TNN / MNN / TensorFlow Lite, etc. ☆204 · Updated 4 years ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆947 · Updated last month
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆866 · Updated last week
- A lightweight, portable pure C99 onnx inference engine for embedded devices with hardware acceleration support. ☆619 · Updated 6 months ago
- A parser, editor and profiler tool for ONNX models. ☆436 · Updated this week
- NVDLA SW ☆497 · Updated 4 years ago
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,318 · Updated this week
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… ☆870 · Updated 6 months ago
- Benchmarking Neural Network Inference on Mobile Devices ☆374 · Updated 2 years ago
- OpenVX sample implementation ☆142 · Updated last year
- ☆903 · Updated last year
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… ☆567 · Updated last year
- Open Neural Network Exchange to C compiler. ☆278 · Updated last month
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executable from a DNN model description. ☆986 · Updated 8 months ago
- High-performance cross-platform inference engine; Anakin runs on x86 CPUs, Arm, NVIDIA GPUs, AMD GPUs, Bitmain and Cambricon devices. ☆533 · Updated 2 years ago
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,342 · Updated this week
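Several entries above (the low-precision matrix multiplication library, QNNPACK, FBGEMM, AIMET) revolve around the same core idea: store tensors as int8 codes plus a scale and zero-point, accumulate products in wide integers, and convert back at the end. A minimal pure-Python sketch of that affine quantization scheme, for illustration only (the function names here are mine, not any of these libraries' APIs):

```python
def quantize(xs, scale, zero_point):
    """Map floats to int8 codes: q = round(x / scale) + zero_point, clamped."""
    return [max(-128, min(127, round(x / scale) + zero_point)) for x in xs]

def dequantize(qs, scale, zero_point):
    """Recover approximate floats: x ~ scale * (q - zero_point)."""
    return [scale * (q - zero_point) for q in qs]

def quantized_dot(a_q, a_zp, b_q, b_zp):
    """Integer-only dot product on zero-point-corrected codes.
    Real result = a_scale * b_scale * quantized_dot(...)."""
    return sum((a - a_zp) * (b - b_zp) for a, b in zip(a_q, b_q))

# Example: dot product of two small float vectors via int8 arithmetic.
a, b = [0.5, -1.25, 2.0], [1.0, 0.75, -0.5]
a_scale, b_scale, zp = 0.02, 0.01, 0
a_q = quantize(a, a_scale, zp)
b_q = quantize(b, b_scale, zp)
approx = a_scale * b_scale * quantized_dot(a_q, zp, b_q, zp)
exact = sum(x * y for x, y in zip(a, b))
assert abs(approx - exact) < 0.01
```

Production kernels differ in the details (per-channel scales, int32 saturation, fixed-point requantization multipliers), but the scale/zero-point arithmetic above is the common thread.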
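The Winograd generator in the list is based on minimal-filtering algorithms: F(2,3) produces two outputs of a 3-tap convolution with 4 multiplications instead of 6, which is where Winograd-based convolution kernels get their speedup. A hand-written F(2,3) sketch in Python, checked against the direct sliding-window form (variable names are mine):

```python
def winograd_f23(d, g):
    """F(2,3): two outputs of a 3-tap correlation of input d[0..3]
    with filter g[0..2], using 4 multiplies instead of 6."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # Filter transform (precomputable once per filter).
    G0, G3 = g0, g2
    G1 = (g0 + g1 + g2) / 2
    G2 = (g0 - g1 + g2) / 2
    # The 4 multiplications.
    m1 = (d0 - d2) * G0
    m2 = (d1 + d2) * G1
    m3 = (d2 - d1) * G2
    m4 = (d1 - d3) * G3
    # Output transform.
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct(d, g):
    """Reference: sliding-window correlation (6 multiplies)."""
    return [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]

d, g = [1.0, 2.0, 3.0, 4.0], [0.5, -1.0, 0.25]
assert winograd_f23(d, g) == direct(d, g)
```

2D convolution kernels apply the same trick as nested transforms (e.g. F(2x2,3x3), cutting 36 multiplies to 16); the filter transform is amortized across the whole input, which is why it pays off for CNN layers.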