Low-precision matrix multiplication
★1,832 · Updated Jan 29, 2024
Alternatives and similar repositories for gemmlowp
Users that are interested in gemmlowp are comparing it to the libraries listed below
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators (★1,549, updated Aug 28, 2019)
- Acceleration package for neural networks on multi-core CPUs (★1,702, updated Jun 11, 2024)
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologies (★3,122, updated this week)
- Open single- and half-precision GEMM implementations (★397, updated Apr 2, 2023)
- Open Machine Learning Compiler Framework (★13,197, updated this week)
- Winograd minimal convolution algorithm generator for convolutional neural networks (★627, updated Feb 9, 2026)
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ (★1,543, updated this week)
- Ristretto: Quantization and compression of large AI models. Author: Philipp Gysel. (★288, updated Jan 24, 2026)
- oneAPI Deep Neural Network Library (oneDNN) (★3,964, updated this week)
- High-efficiency floating-point neural network inference operators for mobile, server, and Web (★2,276, updated this week)
- Compiler for Neural Network hardware accelerators (★3,326, updated May 11, 2024)
- Tutorial on optimizing GEMM performance on Android (★51, updated Feb 17, 2016)
- Easy benchmarking of all publicly accessible implementations of convnets (★2,688, updated Jun 9, 2017)
- A language for fast, portable data-parallel computation (★6,601, updated this week)
- Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1 (★1,065, updated Nov 28, 2018)
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms (★5,035, updated Jun 17, 2024)
- Generate a quantization parameter file for ncnn framework int8 inference (★518, updated Jul 29, 2020)
- ncnn is a high-performance neural network inference framework optimized for the mobile platform (★22,908, updated this week)
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications (★2,911, updated Mar 31, 2023)
- Intel® Nervana™ reference deep learning framework committed to best performance on all hardware (★3,870, updated Dec 23, 2020)
- FeatherCNN is a high-performance inference engine for convolutional neural networks (★1,226, updated Sep 24, 2019)
- MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering high-performance on-device LLMs and Edge AI (★14,533, updated Mar 13, 2026)
- Assembler for NVIDIA Maxwell architecture (★1,059, updated Jan 3, 2023)
- Library for specialized dense and sparse matrix operations, and deep learning primitives (★945, updated Feb 14, 2026)
- An efficient framework for convolutional neural networks (★279, updated Aug 30, 2023)
- CUDA Templates and Python DSLs for High-Performance Linear Algebra (★9,442, updated this week)
- SqueezeNet: AlexNet-level accuracy with 50x fewer parameters (★2,216, updated Jul 9, 2018)
- ImageNet classification using binary Convolutional Neural Networks (★867, updated Dec 5, 2017)
- Benchmarking Deep Learning operations on different hardware (★1,103, updated Apr 25, 2021)
- An open optimized software library project for the ARM® Architecture (★1,530, updated Dec 9, 2022)
- Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17) (★1,088, updated May 2, 2024)
- Caffe implementation of Incremental Network Quantization (★191, updated Jul 29, 2018)
- Quantization of convolutional neural networks (★250, updated Aug 5, 2024)
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description (★1,003, updated Sep 19, 2024)
- Conversion to/from half-precision floating-point formats (★380, updated Aug 16, 2025)
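gemmlowp and several of the libraries above share the same core technique: low-precision GEMM, where matrices are affine-quantized to 8-bit integers, multiplied with 32-bit accumulation, and dequantized afterwards. As a rough NumPy sketch of that idea (not gemmlowp's actual API; the function names and the simple per-matrix min/max quantization scheme here are illustrative):

```python
import numpy as np

def quantize(x):
    """Affine-quantize a float array to uint8; returns (values, zero_point, scale)."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero = int(round(-lo / scale))
    q = np.clip(np.round(x / scale) + zero, 0, 255).astype(np.uint8)
    return q, zero, scale

def quantized_matmul(a_q, b_q, a_zero, b_zero, a_scale, b_scale):
    """Multiply two uint8-quantized matrices, gemmlowp-style.

    Subtracts the zero points, accumulates in int32 (so the sums
    cannot overflow 8-bit arithmetic), then dequantizes to float.
    """
    acc = (a_q.astype(np.int32) - a_zero) @ (b_q.astype(np.int32) - b_zero)
    return a_scale * b_scale * acc

# Compare the quantized result against a float reference.
rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8)).astype(np.float32)
b = rng.standard_normal((8, 3)).astype(np.float32)
a_q, a_zero, a_scale = quantize(a)
b_q, b_zero, b_scale = quantize(b)
approx = quantized_matmul(a_q, b_q, a_zero, b_zero, a_scale, b_scale)
exact = a @ b
print(np.max(np.abs(approx - exact)))  # small quantization error
```

Production kernels differ mainly in how the integer accumulators are requantized (fixed-point multipliers and shifts instead of float scales) and in SIMD-friendly data layout, but the zero-point/int32-accumulation structure is the same.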