andravin / wincnn
Winograd minimal convolution algorithm generator for convolutional neural networks.
☆626 · Feb 9, 2026 · Updated last week
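For context: wincnn generates the transform matrices used by Winograd's minimal filtering algorithms, as described in Lavin & Gray's "Fast Algorithms for Convolutional Neural Networks". Below is a minimal numpy sketch of the smallest 1-D case, F(2,3), using the standard published transforms; it computes two outputs of a 3-tap filter with 4 element-wise multiplications instead of the 6 a direct computation needs. The `winograd_f23` helper name is illustrative only, not part of wincnn's API.

```python
import numpy as np

# Standard F(2,3) transforms (Lavin & Gray): input transform B^T,
# filter transform G, output transform A^T.
Bt = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=np.float64)
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]], dtype=np.float64)
At = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=np.float64)

def winograd_f23(d, g):
    """Two outputs of the 1-D convolution (cross-correlation, as in CNNs)
    of a 4-element input tile d with a 3-tap filter g: Y = A^T[(Gg) * (B^T d)].
    Only the element-wise product costs multiplies in the inner loop: 4 of them."""
    return At @ ((G @ g) * (Bt @ d))

d = np.array([1.0, 2.0, 3.0, 4.0])   # input tile
g = np.array([0.5, 1.0, -0.5])       # filter taps

# Reference: direct sliding-window computation (6 multiplies).
direct = np.array([d[0:3] @ g, d[1:4] @ g])
assert np.allclose(winograd_f23(d, g), direct)
print(winograd_f23(d, g))
```

Larger tiles such as F(4,3) or F(6,3) cut multiplications further, but their transform constants have worse numerical behavior, which is exactly the trade-off a generator like wincnn lets you explore by choosing interpolation points.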
Alternatives and similar repositories for wincnn
Users interested in wincnn are comparing it to the libraries listed below.
- Efficient Sparse-Winograd Convolutional Neural Networks (ICLR 2018) · ☆193 · May 7, 2019 · Updated 6 years ago
- Generate a quantization parameter file for ncnn framework int8 inference · ☆518 · Jul 29, 2020 · Updated 5 years ago
- Acceleration package for neural networks on multi-core CPUs · ☆1,703 · Jun 11, 2024 · Updated last year
- Low-precision matrix multiplication · ☆1,832 · Jan 29, 2024 · Updated 2 years ago
- ☆1,988 · Jul 29, 2023 · Updated 2 years ago
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) · ☆27 · Oct 3, 2023 · Updated 2 years ago
- Fast CUDA Kernels for ResNet Inference. · ☆182 · May 26, 2019 · Updated 6 years ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators · ☆1,549 · Aug 28, 2019 · Updated 6 years ago
- Library for fast image convolution in neural networks on Intel Architecture · ☆30 · Jun 25, 2017 · Updated 8 years ago
- ☆26 · Dec 1, 2016 · Updated 9 years ago
- I'm going to use Winograd's minimal filtering algorithms to introduce a new class of fast algorithms for convolutional neural networks… · ☆12 · Mar 22, 2018 · Updated 7 years ago
- Open single and half precision gemm implementations · ☆398 · Apr 2, 2023 · Updated 2 years ago
- Greentea LibDNN - a universal convolution implementation supporting CUDA and OpenCL · ☆137 · Apr 20, 2017 · Updated 8 years ago
- symmetric int8 gemm · ☆67 · Jun 7, 2020 · Updated 5 years ago
- FeatherCNN is a high performance inference engine for convolutional neural networks. · ☆1,228 · Sep 24, 2019 · Updated 6 years ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. · ☆956 · Apr 11, 2025 · Updated 10 months ago
- dabnn is an accelerated binary neural networks inference framework for mobile platforms · ☆778 · Nov 12, 2019 · Updated 6 years ago
- row-major matmul optimization · ☆701 · Aug 20, 2025 · Updated 5 months ago
- Assembler for NVIDIA Maxwell architecture · ☆1,059 · Jan 3, 2023 · Updated 3 years ago
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologi… · ☆3,113 · Feb 6, 2026 · Updated last week
- An efficient framework for convolutional neural networks · ☆278 · Aug 30, 2023 · Updated 2 years ago
- This is originally a collection of papers on neural network accelerators. Now it's more like my selection of research on deep learning an… · ☆2,048 · Nov 8, 2025 · Updated 3 months ago
- Automatic Schedule Exploration and Optimization Framework for Tensor Computations · ☆182 · Apr 25, 2022 · Updated 3 years ago
- An exploration of log domain "alternative floating point" for hardware ML/AI accelerators. · ☆400 · Mar 11, 2023 · Updated 2 years ago
- Tengine gemm tutorial, step by step · ☆13 · Mar 12, 2021 · Updated 4 years ago
- Easy benchmarking of all publicly accessible implementations of convnets · ☆2,691 · Jun 9, 2017 · Updated 8 years ago
- Open Machine Learning Compiler Framework · ☆13,117 · Updated this week
- High Efficiency Convolution Kernel for Maxwell GPU Architecture · ☆137 · May 8, 2017 · Updated 8 years ago
- arm neon related documentation and instruction semantics · ☆247 · May 21, 2019 · Updated 6 years ago
- implementation of winograd minimal convolution algorithm on Intel Architecture · ☆39 · Dec 4, 2017 · Updated 8 years ago
- PyTorch implementation of Data Free Quantization Through Weight Equalization and Bias Correction. · ☆263 · Oct 3, 2023 · Updated 2 years ago
- 🔥 (yolov3 yolov4 yolov5 unet ...) A mini pytorch inference framework inspired by darknet. · ☆739 · Apr 23, 2023 · Updated 2 years ago
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. · ☆1,006 · Sep 19, 2024 · Updated last year
- BLISlab: A Sandbox for Optimizing GEMM · ☆555 · Jun 17, 2021 · Updated 4 years ago
- arm-neon · ☆92 · Aug 2, 2024 · Updated last year
- ☆1,655 · Sep 11, 2018 · Updated 7 years ago
- Intel® Nervana™ reference deep learning framework committed to best performance on all hardware · ☆3,868 · Dec 23, 2020 · Updated 5 years ago
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. · ☆5,031 · Jun 17, 2024 · Updated last year
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications. · ☆2,912 · Mar 31, 2023 · Updated 2 years ago