QNNPACK (Quantized Neural Network PACKage): a mobile-optimized implementation of quantized neural network operators
☆ 1,546 · Updated Aug 28, 2019
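Most of the libraries in this comparison accelerate the same primitive: 8-bit quantized matrix multiplication with 32-bit accumulation followed by requantization. A minimal NumPy sketch of that scheme, for orientation only (the function names, scales, and zero points here are illustrative assumptions, not QNNPACK's actual API):

```python
import numpy as np

# Affine quantization: a float x is represented as an 8-bit q with a
# scale s and zero point z such that x ≈ s * (q - z).

def quantize(x, scale, zero_point):
    """Map a float array to uint8 under x ≈ scale * (q - zero_point)."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def quantized_matmul(a_q, a_scale, a_zp, b_q, b_scale, b_zp, out_scale, out_zp):
    """uint8 GEMM: subtract zero points, accumulate in int32, requantize."""
    # 32-bit accumulation avoids overflow of products of 8-bit values.
    acc = (a_q.astype(np.int32) - a_zp) @ (b_q.astype(np.int32) - b_zp)
    # Requantize: the combined scale maps int32 accumulators to uint8 output.
    out = np.round(acc * (a_scale * b_scale / out_scale)) + out_zp
    return np.clip(out, 0, 255).astype(np.uint8)

# Compare against the float reference computation.
rng = np.random.default_rng(0)
a = rng.uniform(-1, 1, (4, 8)).astype(np.float32)
b = rng.uniform(-1, 1, (8, 3)).astype(np.float32)
a_q = quantize(a, 1 / 127, 128)
b_q = quantize(b, 1 / 127, 128)
ref = a @ b
out = quantized_matmul(a_q, 1 / 127, 128, b_q, 1 / 127, 128,
                       out_scale=0.05, out_zp=128)
dequant = (out.astype(np.float32) - 128) * 0.05
print(float(np.max(np.abs(dequant - ref))))  # small quantization error
```

Production kernels such as QNNPACK's implement the same arithmetic with SIMD integer instructions and fixed-point requantization multipliers rather than floating-point scales.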
Alternatives and similar repositories for QNNPACK
Users interested in QNNPACK are comparing it to the libraries listed below.
- Low-precision matrix multiplication · ☆ 1,831 · Updated Jan 29, 2024
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications. · ☆ 2,914 · Updated Mar 31, 2023
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. · ☆ 5,032 · Updated Jun 17, 2024
- FeatherCNN is a high-performance inference engine for convolutional neural networks. · ☆ 1,228 · Updated Sep 24, 2019
- dabnn is an accelerated binary neural network inference framework for mobile platforms. · ☆ 778 · Updated Nov 12, 2019
- Generate a quantization parameter file for ncnn framework int8 inference. · ☆ 518 · Updated Jul 29, 2020
- Acceleration package for neural networks on multi-core CPUs. · ☆ 1,701 · Updated Jun 11, 2024
- TVM integration into PyTorch. · ☆ 456 · Updated Jan 15, 2020
- High-efficiency floating-point neural network inference operators for mobile, server, and Web. · ☆ 2,263 · Updated this week
- Open Machine Learning Compiler Framework. · ☆ 13,142 · Updated this week
- MMdnn is a set of tools to help users inter-operate among different deep learning frameworks, e.g. model conversion and visualization. Co… · ☆ 5,817 · Updated Aug 7, 2025
- Compiler for Neural Network hardware accelerators. · ☆ 3,326 · Updated May 11, 2024
- Mobile vision models and code. · ☆ 917 · Updated Feb 11, 2026
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologi… · ☆ 3,120 · Updated this week
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ · ☆ 1,534 · Updated this week
- MNN is a blazing-fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba. Full multimodal LLM … · ☆ 14,276 · Updated this week
- Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17). · ☆ 1,088 · Updated May 2, 2024
- Tengine is a lightweight, high-performance, modular inference engine for embedded devices. · ☆ 4,506 · Updated Mar 6, 2025
- ncnn is a high-performance neural network inference framework optimized for mobile platforms. · ☆ 22,819 · Updated Feb 20, 2026
- Code for "And the bit goes down: Revisiting the quantization of neural networks". · ☆ 631 · Updated Nov 9, 2020
- Winograd minimal convolution algorithm generator for convolutional neural networks. · ☆ 627 · Updated Feb 9, 2026
- [ICLR 2019] ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware. · ☆ 1,449 · Updated Aug 30, 2024
- Pelee: A Real-Time Object Detection System on Mobile Devices. · ☆ 886 · Updated Jan 4, 2019
- ☆ 1,992 · Updated Jul 29, 2023
- [CVPR 2018] Real-Time Rotation-Invariant Face Detection with Progressive Calibration Networks. · ☆ 1,087 · Updated May 11, 2023
- Rethinking the Value of Network Pruning (PyTorch) (ICLR 2019). · ☆ 1,516 · Updated Jun 7, 2020
- Bolt is a deep learning library with high performance and heterogeneous flexibility. · ☆ 956 · Updated Apr 11, 2025
- TNN: a uniform deep learning inference framework for mobile, desktop, and server, developed by Tencent Youtu Lab and Guangying Lab. TNN is … · ☆ 4,619 · Updated May 9, 2025
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… · ☆ 5,637 · Updated this week
- Facebook AI Performance Evaluation Platform. · ☆ 393 · Updated Feb 20, 2026
- Code for our paper "CenterNet: Keypoint Triplets for Object Detection". · ☆ 1,889 · Updated Apr 18, 2022
- oneAPI Deep Neural Network Library (oneDNN). · ☆ 3,956 · Updated this week
- A PyTorch extension: tools for easy mixed-precision and distributed training in PyTorch. · ☆ 8,926 · Updated this week
- nGraph has moved to OpenVINO. · ☆ 1,344 · Updated Oct 15, 2020
- Benchmarking Neural Network Inference on Mobile Devices. · ☆ 386 · Updated Apr 10, 2023
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open-source compone… · ☆ 12,723 · Updated this week
- ☆ 1,510 · Updated Aug 27, 2020
- Simplify your ONNX model. · ☆ 4,297 · Updated this week
- Caffe implementation of Google's MobileNets (v1 and v2). · ☆ 1,274 · Updated Jun 8, 2021