gemmlowp - Low-precision matrix multiplication (☆1,831, last updated Jan 29, 2024)
Alternatives and similar repositories for gemmlowp
Users interested in gemmlowp are comparing it to the libraries listed below.
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators (☆1,546, last updated Aug 28, 2019)
- Acceleration package for neural networks on multi-core CPUs (☆1,701, last updated Jun 11, 2024)
- ☆1,992, last updated Jul 29, 2023
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologi… (☆3,120, updated this week)
- Open single- and half-precision GEMM implementations (☆398, last updated Apr 2, 2023)
- Open Machine Learning Compiler Framework (☆13,142, updated this week)
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ (☆1,534, updated this week)
- oneAPI Deep Neural Network Library (oneDNN) (☆3,956, updated this week)
- Compiler for Neural Network hardware accelerators (☆3,326, last updated May 11, 2024)
- Winograd minimal convolution algorithm generator for convolutional neural networks (☆627, last updated Feb 9, 2026)
- High-efficiency floating-point neural network inference operators for mobile, server, and Web (☆2,263, updated this week)
- Ristretto: quantization and compression of large AI models. Author: Philipp Gysel. (☆288, last updated Jan 24, 2026)
- ☆1,655, last updated Sep 11, 2018
- ☆321, last updated Feb 17, 2026
- Easy benchmarking of all publicly accessible implementations of convnets (☆2,689, last updated Jun 9, 2017)
- Training deep neural networks with weights and activations constrained to +1 or -1 (☆1,062, last updated Nov 28, 2018)
- A language for fast, portable data-parallel computation (☆6,577, updated this week)
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms (☆5,032, last updated Jun 17, 2024)
- Tutorial on optimizing GEMM performance on Android (☆51, last updated Feb 17, 2016)
- Intel® Nervana™ reference deep learning framework committed to best performance on all hardware (☆3,869, last updated Dec 23, 2020)
- Generate a quantization parameter file for ncnn framework int8 inference (☆518, last updated Jul 29, 2020)
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications (☆2,914, last updated Mar 31, 2023)
- ncnn is a high-performance neural network inference framework optimized for the mobile platform (☆22,819, last updated Feb 20, 2026)
- SqueezeNet: AlexNet-level accuracy with 50x fewer parameters (☆2,217, last updated Jul 9, 2018)
- Conversion to/from half-precision floating-point formats (☆379, last updated Aug 16, 2025)
- CUDA templates and Python DSLs for high-performance linear algebra (☆9,315, updated this week)
- Library for specialized dense and sparse matrix operations, and deep learning primitives (☆938, last updated Feb 14, 2026)
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description (☆1,006, last updated Sep 19, 2024)
- FeatherCNN is a high-performance inference engine for convolutional neural networks (☆1,228, last updated Sep 24, 2019)
- Benchmarking deep learning operations on different hardware (☆1,102, last updated Apr 25, 2021)
- ☆404, last updated Mar 15, 2019
- MNN is a blazing-fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba. Full multimodal LLM … (☆14,248, last updated Feb 16, 2026)
- Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17) (☆1,088, last updated May 2, 2024)
- Assembler for the NVIDIA Maxwell architecture (☆1,059, last updated Jan 3, 2023)
- An efficient framework for convolutional neural networks (☆278, last updated Aug 30, 2023)
- Header-only, dependency-free deep learning framework in C++14 (☆6,017, last updated Apr 17, 2022)
- ImageNet classification using binary convolutional neural networks (☆866, last updated Dec 5, 2017)
- Matrix Shadow: lightweight CPU/GPU matrix and tensor template library in C++/CUDA for (deep) machine learning (☆1,121, last updated Aug 4, 2019)
- An open optimized software library project for the ARM® Architecture (☆1,528, last updated Dec 9, 2022)
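For context, the low-precision (quantized) matrix multiplication that gemmlowp and several of the libraries above provide can be sketched in NumPy. This is a minimal illustration of the general affine-quantization idea (uint8 storage, int32 accumulation, float rescale); the function names, scales, and zero points are illustrative assumptions, not gemmlowp's actual API:

```python
import numpy as np

def quantize(x, scale, zero_point):
    # Map float values to uint8 with an affine scheme: q = round(x / scale) + zp
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def quantized_matmul(a_q, b_q, a_zp, b_zp, a_scale, b_scale):
    # Subtract zero points and accumulate in int32 to avoid overflow,
    # then rescale the integer result back to float.
    acc = (a_q.astype(np.int32) - a_zp) @ (b_q.astype(np.int32) - b_zp)
    return acc * (a_scale * b_scale)

rng = np.random.default_rng(0)
a = rng.uniform(-1, 1, (4, 8)).astype(np.float32)
b = rng.uniform(-1, 1, (8, 4)).astype(np.float32)

# Illustrative quantization parameters for data in [-1, 1]
a_q = quantize(a, scale=1 / 127, zero_point=128)
b_q = quantize(b, scale=1 / 127, zero_point=128)
c = quantized_matmul(a_q, b_q, 128, 128, 1 / 127, 1 / 127)

# The quantized result closely tracks the float reference
assert np.max(np.abs(c - a @ b)) < 0.1
```

Production libraries differ mainly in how they implement the int32 accumulation loop (SIMD, cache blocking, fixed-point output rescaling), but the arithmetic above is the common core.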