Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators
☆1,549 · Aug 28, 2019 · Updated 6 years ago
Alternatives and similar repositories for QNNPACK
Users interested in QNNPACK compare it to the libraries listed below:
- Low-precision matrix multiplication ☆1,832 · Jan 29, 2024 · Updated 2 years ago
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications. ☆2,911 · Mar 31, 2023 · Updated 2 years ago
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. ☆5,035 · Jun 17, 2024 · Updated last year
- FeatherCNN is a high-performance inference engine for convolutional neural networks. ☆1,226 · Sep 24, 2019 · Updated 6 years ago
- Acceleration package for neural networks on multi-core CPUs ☆1,702 · Jun 11, 2024 · Updated last year
- Generate a quantization parameter file for ncnn framework int8 inference ☆518 · Jul 29, 2020 · Updated 5 years ago
- dabnn is an accelerated binary neural network inference framework for mobile platforms ☆778 · Nov 12, 2019 · Updated 6 years ago
- TVM integration into PyTorch ☆456 · Jan 15, 2020 · Updated 6 years ago
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,276 · Updated this week
- Open Machine Learning Compiler Framework ☆13,197 · Updated this week
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologi… ☆3,122 · Updated this week
- Compiler for Neural Network hardware accelerators ☆3,326 · May 11, 2024 · Updated last year
- ncnn is a high-performance neural network inference framework optimized for mobile platforms ☆22,908 · Updated this week
- MMdnn is a set of tools that help users interoperate among different deep learning frameworks, e.g. model conversion and visualization. Co… ☆5,813 · Aug 7, 2025 · Updated 7 months ago
- MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering high-performance on-device LLMs and Edge AI. ☆14,618 · Updated this week
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,543 · Updated this week
- Tengine is a lightweight, high-performance, modular inference engine for embedded devices ☆4,511 · Mar 6, 2025 · Updated last year
- Winograd minimal convolution algorithm generator for convolutional neural networks. ☆627 · Feb 9, 2026 · Updated last month
- Mobile vision models and code ☆918 · Feb 11, 2026 · Updated last month
- Code for "And the bit goes down: Revisiting the quantization of neural networks" ☆631 · Nov 9, 2020 · Updated 5 years ago
- Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17) ☆1,088 · May 2, 2024 · Updated last year
- ☆2,001 · Jul 29, 2023 · Updated 2 years ago
- [ICLR 2019] ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware ☆1,449 · Aug 30, 2024 · Updated last year
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆956 · Apr 11, 2025 · Updated 11 months ago
- TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop, and server. TNN is … ☆4,626 · May 9, 2025 · Updated 10 months ago
- [CVPR 2018] Real-Time Rotation-Invariant Face Detection with Progressive Calibration Networks ☆1,084 · May 11, 2023 · Updated 2 years ago
- Pelee: A Real-Time Object Detection System on Mobile Devices ☆886 · Jan 4, 2019 · Updated 7 years ago
- Facebook AI Performance Evaluation Platform ☆394 · Updated this week
- Rethinking the Value of Network Pruning (PyTorch) (ICLR 2019) ☆1,516 · Jun 7, 2020 · Updated 5 years ago
- Benchmarking Neural Network Inference on Mobile Devices ☆386 · Apr 10, 2023 · Updated 2 years ago
- oneAPI Deep Neural Network Library (oneDNN) ☆3,964 · Updated this week
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… ☆5,647 · Updated this week
- A PyTorch extension: tools for easy mixed-precision and distributed training in PyTorch ☆8,936 · Updated this week
- Code for our paper "CenterNet: Keypoint Triplets for Object Detection" ☆1,886 · Apr 18, 2022 · Updated 3 years ago
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆12,800 · Mar 9, 2026 · Updated last week
- Arm NN ML Software ☆1,301 · Jan 23, 2026 · Updated last month
- PaddlePaddle High Performance Deep Learning Inference Engine for Mobile and Edge ☆7,236 · May 22, 2025 · Updated 10 months ago
- Simplify your onnx model ☆4,309 · Feb 26, 2026 · Updated 3 weeks ago
- nGraph has moved to OpenVINO ☆1,341 · Oct 15, 2020 · Updated 5 years ago
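Many of the libraries above (QNNPACK itself, the low-precision GEMM projects, ncnn's int8 path) run inference on 8-bit tensors produced by an affine quantization scheme: a float value is mapped to an integer via a scale and a zero point. The sketch below illustrates that general idea in NumPy; the function names and the min/max-based calibration are illustrative assumptions, not any listed library's actual API.

```python
import numpy as np

def quantize_affine(x, num_bits=8):
    """Illustrative affine quantization: q = round(x / scale) + zero_point,
    clamped to the unsigned range [0, 2^num_bits - 1]. Scale and zero point
    are derived here from the tensor's min/max (a simple calibration choice)."""
    qmin, qmax = 0, (1 << num_bits) - 1
    scale = float(x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - float(x.min()) / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize_affine(q, scale, zero_point):
    """Recover approximate float values: x ≈ scale * (q - zero_point)."""
    return scale * (q.astype(np.int32) - zero_point)

x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale, zp = quantize_affine(x)
x_hat = dequantize_affine(q, scale, zp)
# Per-element round-trip error is bounded by half the quantization step.
assert np.max(np.abs(x - x_hat)) <= scale / 2 + 1e-6
```

Production engines refine this in ways the sketch omits: per-channel scales, symmetric signed-int8 variants, and int32 accumulation inside the quantized GEMM before requantizing the result.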