google / gemmlowp
Low-precision matrix multiplication
☆1,800 · Updated last year
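gemmlowp's "low-precision" refers to GEMM over matrices quantized to uint8, where each real value is represented as `scale * (q - zero_point)` and products are accumulated in int32 before the offsets and scales are applied. A minimal pure-Python sketch of that core idea (this is illustrative only, not gemmlowp's actual API; the function name and arguments are made up for the example):

```python
def quantized_matmul(A, B, a_zero, b_zero):
    """Multiply uint8-valued matrices A (MxK) and B (KxN), returning
    int32-style accumulators of (A - a_zero) @ (B - b_zero).
    The real-valued result would then be a_scale * b_scale * C[i][j]."""
    M, K, N = len(A), len(A[0]), len(B[0])
    C = [[0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            acc = 0  # wide accumulator; entries stay small enough for int32
            for k in range(K):
                acc += (A[i][k] - a_zero) * (B[k][j] - b_zero)
            C[i][j] = acc
    return C

# Example: both matrices quantized with zero point 128.
A = [[130, 120], [140, 100]]
B = [[128, 132], [126, 124]]
print(quantized_matmul(A, B, 128, 128))  # → [[16, 40], [56, 160]]
```

Production libraries avoid subtracting the zero points inside the inner loop by expanding the product into an offset-free uint8 GEMM plus row/column-sum correction terms, but the arithmetic is equivalent to the sketch above.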
Alternatives and similar repositories for gemmlowp:
Users interested in gemmlowp are comparing it to the libraries listed below.
- Acceleration package for neural networks on multi-core CPUs ☆1,687 · Updated 10 months ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,540 · Updated 5 years ago
- Benchmarking Deep Learning operations on different hardware ☆1,083 · Updated 4 years ago
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,313 · Updated this week
- Winograd minimal convolution algorithm generator for convolutional neural networks. ☆616 · Updated 4 years ago
- Assembler for NVIDIA Maxwell architecture ☆996 · Updated 2 years ago
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,007 · Updated this week
- A domain-specific language to express machine learning workloads. ☆1,759 · Updated 2 years ago
- Arm NN ML Software. The code here is a read-only mirror of https://review.mlplatform.org/admin/repos/ml/armnn ☆1,253 · Updated this week
- oneAPI Deep Neural Network Library (oneDNN) ☆3,783 · Updated this week
- The Tensor Algebra SuperOptimizer for Deep Learning ☆710 · Updated 2 years ago
- nGraph has moved to OpenVINO ☆1,349 · Updated 4 years ago
- ImageNet classification using binary Convolutional Neural Networks ☆858 · Updated 7 years ago
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologies. ☆2,960 · Updated this week
- [ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl ☆1,745 · Updated last year
- The Tensor Algebra Compiler (taco) computes sparse tensor expressions on CPUs and GPUs ☆1,293 · Updated 3 weeks ago
- Compute Library for Deep Neural Networks (clDNN) ☆574 · Updated 2 years ago
- ATen: A TENsor library for C++11 ☆694 · Updated 5 years ago
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆982 · Updated 7 months ago
- Efficient GPU kernels for block-sparse matrix multiplication and convolution ☆1,039 · Updated last year
- High-performance cross-platform inference engine; Anakin runs on x86 CPU, Arm, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices. ☆533 · Updated 2 years ago
- Library for specialized dense and sparse matrix operations, and deep learning primitives. ☆869 · Updated last week
- Dive into Deep Learning Compiler ☆646 · Updated 2 years ago
- "Multi-Level Intermediate Representation" Compiler Infrastructure ☆1,744 · Updated 4 years ago
- Open single- and half-precision GEMM implementations ☆381 · Updated 2 years ago
- A performant and modular runtime for TensorFlow ☆761 · Updated 2 weeks ago
- Compiler for Neural Network hardware accelerators ☆3,289 · Updated 11 months ago
- Common in-memory tensor structure ☆983 · Updated 3 weeks ago