The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologies.
☆3,126 · Apr 2, 2026 · Updated last week
Alternatives and similar repositories for ComputeLibrary
Users interested in ComputeLibrary are comparing it to the libraries listed below.
- Arm NN ML Software. ☆1,301 · Jan 23, 2026 · Updated 2 months ago
- Heterogeneous Run Time version of Caffe. Adds heterogeneous capabilities to Caffe using a heterogeneous computing infrastructure frame… ☆269 · Oct 16, 2018 · Updated 7 years ago
- An open optimized software library project for the ARM® Architecture. ☆1,533 · Dec 9, 2022 · Updated 3 years ago
- Low-precision matrix multiplication. ☆1,838 · Jan 29, 2024 · Updated 2 years ago
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. ☆5,036 · Jun 17, 2024 · Updated last year
- Tengine is a lightweight, high-performance, modular inference engine for embedded devices. ☆4,515 · Mar 6, 2025 · Updated last year
- ncnn is a high-performance neural network inference framework optimized for the mobile platform. ☆23,051 · Updated this week
- Open Machine Learning Compiler Framework. ☆13,252 · Updated this week
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators. ☆1,548 · Aug 28, 2019 · Updated 6 years ago
- C++ image processing and machine learning library using SIMD: SSE, AVX, AVX-512 and AMX for x86/x64, NEON for ARM. ☆2,242 · Updated this week
- ☆157 · Feb 19, 2025 · Updated last year
- oneAPI Deep Neural Network Library (oneDNN). ☆3,974 · Updated this week
- Acceleration package for neural networks on multi-core CPUs. ☆1,704 · Jun 11, 2024 · Updated last year
- FeatherCNN is a high-performance inference engine for convolutional neural networks. ☆1,226 · Sep 24, 2019 · Updated 6 years ago
- MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering high-performance on-device LLMs and Edge AI. ☆14,753 · Updated this week
- A language for fast, portable data-parallel computation. ☆6,612 · Updated this week
- High-efficiency floating-point neural network inference operators for mobile, server, and Web. ☆2,299 · Updated this week
- High-performance cross-platform inference engine; you can run Anakin on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain and Cambricon devices. ☆537 · Sep 23, 2022 · Updated 3 years ago
- TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop and server. TNN is … ☆4,631 · May 9, 2025 · Updated 11 months ago
- OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version. ☆7,360 · Updated this week
- A platform-independent header allowing any C/C++ code containing ARM NEON intrinsic functions to be compiled for x86 target systems using S… ☆489 · Oct 23, 2025 · Updated 5 months ago
- ☆2,005 · Jul 29, 2023 · Updated 2 years ago
- PaddlePaddle High Performance Deep Learning Inference Engine for Mobile and Edge. ☆7,245 · May 22, 2025 · Updated 10 months ago
- Makes ARM NEON documentation accessible (with examples). ☆409 · Apr 13, 2024 · Updated last year
- Arm Machine Learning tutorials and examples. ☆484 · Mar 27, 2026 · Updated last week
- ARM NEON related documentation and instruction semantics (in Chinese). ☆247 · May 21, 2019 · Updated 6 years ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆957 · Apr 11, 2025 · Updated 11 months ago
- Tuned OpenCL BLAS. ☆1,171 · Updated this week
- Row-major matmul optimization. ☆713 · Feb 24, 2026 · Updated last month
- Optimizing Mobile Deep Learning on ARM GPU with TVM. ☆183 · Oct 15, 2018 · Updated 7 years ago
- Optimized implementations of various library functions for ARM architecture processors. ☆689 · Apr 1, 2026 · Updated last week
- A primitive library for neural networks. ☆1,368 · Nov 24, 2024 · Updated last year
- CMSIS Version 5 Development Repository. ☆1,584 · Sep 3, 2024 · Updated last year
- Generate a quantization parameter file for ncnn framework int8 inference. ☆518 · Jul 29, 2020 · Updated 5 years ago
- Winograd minimal convolution algorithm generator for convolutional neural networks. ☆627 · Feb 9, 2026 · Updated 2 months ago
- Open Source Library for GPU-Accelerated Execution of Trained Deep Convolutional Neural Networks on Android. ☆542 · Apr 12, 2017 · Updated 8 years ago
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆12,851 · Mar 25, 2026 · Updated 2 weeks ago
- Compiler for Neural Network hardware accelerators. ☆3,328 · May 11, 2024 · Updated last year
- Arm NEON optimization practice. ☆392 · Dec 22, 2020 · Updated 5 years ago