google / gemmlowp
Low-precision matrix multiplication
☆1,817 · Updated last year
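For context, gemmlowp performs matrix multiplication on 8-bit quantized inputs and accumulates products in 32-bit integers. The snippet below is a minimal, unoptimized C++ sketch of that general idea, not gemmlowp's actual API; the function name, parameters, and zero-point handling are illustrative assumptions.

```cpp
// Minimal sketch (not gemmlowp's API): quantized values are stored as uint8,
// interpreted as real = scale * (q - zero_point). Multiply-accumulate is done
// on (q - zero_point) terms in int32 to avoid overflow; downstream code would
// rescale the int32 accumulators back down to 8 bits.
#include <cstdint>
#include <iostream>
#include <vector>

// Multiply an MxK uint8 matrix by a KxN uint8 matrix, accumulating in int32.
std::vector<int32_t> QuantizedGemm(const std::vector<uint8_t>& lhs,
                                   const std::vector<uint8_t>& rhs,
                                   int M, int K, int N,
                                   int32_t lhs_zero_point,
                                   int32_t rhs_zero_point) {
  std::vector<int32_t> acc(static_cast<size_t>(M) * N, 0);
  for (int m = 0; m < M; ++m)
    for (int n = 0; n < N; ++n)
      for (int k = 0; k < K; ++k)
        acc[m * N + n] +=
            (static_cast<int32_t>(lhs[m * K + k]) - lhs_zero_point) *
            (static_cast<int32_t>(rhs[k * N + n]) - rhs_zero_point);
  return acc;
}

int main() {
  // Toy 2x3 * 3x2 example with zero points of 0.
  std::vector<uint8_t> a = {1, 2, 3, 4, 5, 6};
  std::vector<uint8_t> b = {7, 8, 9, 10, 11, 12};
  for (int32_t v : QuantizedGemm(a, b, 2, 3, 2, 0, 0))
    std::cout << v << ' ';  // prints: 58 64 139 154
  std::cout << '\n';
}
```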
Alternatives and similar repositories for gemmlowp
Users interested in gemmlowp are comparing it to the libraries listed below.
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,546 · Updated 6 years ago
- Acceleration package for neural networks on multi-core CPUs ☆1,700 · Updated last year
- nGraph has moved to OpenVINO ☆1,343 · Updated 5 years ago
- Winograd minimal convolution algorithm generator for convolutional neural networks. ☆623 · Updated 5 years ago
- ☆1,655 · Updated 7 years ago
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,459 · Updated this week
- Benchmarking Deep Learning operations on different hardware ☆1,097 · Updated 4 years ago
- ☆1,932 · Updated 2 years ago
- A domain specific language to express machine learning workloads. ☆1,760 · Updated 2 years ago
- Matrix Shadow: Lightweight CPU/GPU Matrix and Tensor Template Library in C++/CUDA for (Deep) Machine Learning ☆1,117 · Updated 6 years ago
- Compiler for Neural Network hardware accelerators ☆3,313 · Updated last year
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologi… ☆3,065 · Updated this week
- High-performance cross-platform inference engine; Anakin runs on x86 CPU, Arm, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices. ☆534 · Updated 3 years ago
- This fork of BVLC/Caffe is dedicated to improving performance of this deep learning framework when running on CPU, in particular Intel® X… ☆851 · Updated 3 years ago
- A performant and modular runtime for TensorFlow ☆759 · Updated last month
- Compute Library for Deep Neural Networks (clDNN) ☆576 · Updated 2 years ago
- Library for specialized dense and sparse matrix operations, and deep learning primitives. ☆914 · Updated 2 weeks ago
- Assembler for NVIDIA Maxwell architecture ☆1,044 · Updated 2 years ago
- oneAPI Deep Neural Network Library (oneDNN) ☆3,901 · Updated this week
- FeatherCNN is a high performance inference engine for convolutional neural networks. ☆1,221 · Updated 6 years ago
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,144 · Updated this week
- The Tensor Algebra Compiler (taco) computes sparse tensor expressions on CPUs and GPUs ☆1,328 · Updated 6 months ago
- [ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl ☆1,798 · Updated 2 years ago
- Source code examples from the Parallel Forall Blog ☆1,305 · Updated last month
- ImageNet classification using binary Convolutional Neural Networks ☆867 · Updated 7 years ago
- Caffe: a fast open framework for deep learning. ☆670 · Updated 2 years ago
- Open single and half precision gemm implementations ☆393 · Updated 2 years ago
- Collective communications library with various primitives for multi-machine training. ☆1,364 · Updated last week
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆995 · Updated last year
- Embedded and mobile deep learning research resources ☆756 · Updated 2 years ago