ARM-software / ComputeLibrary
The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologies.
☆3,054 · Updated this week
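The library exposes C++ runtime functions for both the Arm CPU (NEON) and GPU (OpenCL) backends. The snippet below is a minimal sketch of what calling the NEON runtime API can look like, following the pattern used in the repository's own examples (arm_compute::Tensor plus a runtime function such as NEArithmeticAddition); the tensor shape and the choice of an element-wise addition are illustrative assumptions, not a prescribed workflow.

```cpp
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/NEON/NEFunctions.h"
#include "arm_compute/runtime/Tensor.h"

int main()
{
    using namespace arm_compute;

    // Describe two FP32 source tensors and one destination tensor (4x4, single channel).
    Tensor a, b, out;
    const TensorInfo info(TensorShape(4U, 4U), 1, DataType::F32);
    a.allocator()->init(info);
    b.allocator()->init(info);
    out.allocator()->init(info);

    // Configure an element-wise addition that runs on the CPU (NEON) backend.
    NEArithmeticAddition add;
    add.configure(&a, &b, &out, ConvertPolicy::SATURATE);

    // Allocate backing memory after configuration, then execute the operator.
    a.allocator()->allocate();
    b.allocator()->allocate();
    out.allocator()->allocate();
    add.run();

    return 0;
}
```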
Alternatives and similar repositories for ComputeLibrary
Users interested in ComputeLibrary are comparing it to the libraries listed below.
- Arm NN ML Software. ☆1,285 · Updated last week
- An open optimized software library project for the ARM® Architecture ☆1,506 · Updated 2 years ago
- Low-precision matrix multiplication ☆1,815 · Updated last year
- oneAPI Deep Neural Network Library (oneDNN) ☆3,894 · Updated this week
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,546 · Updated 6 years ago
- C++ image processing and machine learning library using SIMD: SSE, AVX, AVX-512, and AMX for x86/x64, and NEON for ARM. ☆2,208 · Updated last week
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,118 · Updated this week
- Acceleration package for neural networks on multi-core CPUs ☆1,701 · Updated last year
- Tuned OpenCL BLAS ☆1,145 · Updated last week
- Arm Machine Learning tutorials and examples ☆472 · Updated this week
- Makes ARM NEON documentation accessible (with examples); a short intrinsics sketch follows this list. ☆404 · Updated last year
- ☆1,931 · Updated 2 years ago
- nGraph has moved to OpenVINO ☆1,343 · Updated 4 years ago
- Embedded and mobile deep learning research resources ☆757 · Updated 2 years ago
- ☆155 · Updated 7 months ago
- The platform-independent header that allows compiling any C/C++ code containing ARM NEON intrinsic functions for x86 target systems using S… ☆477 · Updated 3 weeks ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆952 · Updated 5 months ago
- A list of ICs and IPs for AI, Machine Learning and Deep Learning. ☆1,691 · Updated last year
- TinyML AI inference library ☆1,864 · Updated 4 months ago
- FeatherCNN is a high performance inference engine for convolutional neural networks. ☆1,219 · Updated 6 years ago
- A software library containing BLAS functions written in OpenCL ☆862 · Updated last year
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,445 · Updated this week
- High-performance cross-platform inference engine; you can run Anakin on x86 CPU, Arm, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices. ☆534 · Updated 3 years ago
- Winograd minimal convolution algorithm generator for convolutional neural networks. ☆622 · Updated 4 years ago
- [ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl ☆1,790 · Updated last year
- Compute Library for Deep Neural Networks (clDNN) ☆576 · Updated 2 years ago
- Khronos OpenCL-Headers ☆737 · Updated 3 weeks ago
- A lightweight, portable pure C99 ONNX inference engine for embedded devices with hardware acceleration support. ☆633 · Updated 2 months ago
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆1,181 · Updated this week
- Benchmarking Neural Network Inference on Mobile Devices ☆383 · Updated 2 years ago
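Several of the entries above (the Simd library, the ARM NEON documentation guide, and the NEON-to-SSE header) centre on ARM NEON intrinsics. For reference, this is a minimal, self-contained sketch of the kind of intrinsic code those projects document or emulate; the function name add_f32 and the 4-float lane width are illustrative assumptions.

```cpp
#include <arm_neon.h>

// Element-wise float addition, four lanes per NEON iteration,
// with a scalar loop for the remaining tail elements.
void add_f32(const float *a, const float *b, float *out, int n)
{
    int i = 0;
    for (; i + 4 <= n; i += 4)
    {
        const float32x4_t va = vld1q_f32(a + i); // load 4 floats from a
        const float32x4_t vb = vld1q_f32(b + i); // load 4 floats from b
        vst1q_f32(out + i, vaddq_f32(va, vb));   // store the 4 sums
    }
    for (; i < n; ++i)
    {
        out[i] = a[i] + b[i]; // scalar tail
    }
}
```

On an x86 host, a compatibility header such as the NEON-to-SSE shim listed above lets the same intrinsic calls compile against SSE equivalents.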