High-efficiency floating-point neural network inference operators for mobile, server, and Web
☆2,326 · Updated Apr 28, 2026
Alternatives and similar repositories for XNNPACK
Users interested in XNNPACK compare it to the libraries listed below.
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,548 · Updated Aug 28, 2019
- Low-precision matrix multiplication ☆1,841 · Updated Jan 29, 2024
- Open Machine Learning Compiler Framework ☆13,304 · Updated this week
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologies ☆3,141 · Updated this week
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,557 · Updated this week
- Arm NN ML Software ☆1,301 · Updated Jan 23, 2026
- oneAPI Deep Neural Network Library (oneDNN) ☆3,984 · Updated this week
- A retargetable MLIR-based machine learning compiler and runtime toolkit ☆3,738 · Updated this week
- Acceleration package for neural networks on multi-core CPUs ☆1,704 · Updated Jun 11, 2024
- Compiler for Neural Network hardware accelerators ☆3,326 · Updated May 11, 2024
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description ☆1,000 · Updated Sep 19, 2024
- A language for fast, portable data-parallel computation ☆6,521 · Updated Apr 24, 2026
- MNN: a blazing-fast, lightweight inference engine battle-tested by Alibaba, powering high-performance on-device LLMs and Edge AI ☆15,068 · Updated this week
- ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator ☆20,355 · Updated this week
- Bolt is a deep learning library with high performance and heterogeneous flexibility ☆958 · Updated Apr 11, 2025
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models ☆2,604 · Updated Apr 24, 2026
- Development repository for the Triton language and compiler ☆19,087 · Updated this week
- ncnn is a high-performance neural network inference framework optimized for the mobile platform ☆23,151 · Updated Apr 22, 2026
- Cross-platform, customizable ML solutions for live and streaming media ☆34,982 · Updated this week
- Simplify your ONNX model ☆4,328 · Updated this week
- Open standard for machine learning interoperability ☆20,746 · Updated this week
- On-device AI across mobile, embedded and edge for PyTorch ☆4,547 · Updated this week
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms ☆5,037 · Updated Jun 17, 2024
- TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop, and server ☆4,631 · Updated May 9, 2025
- A performant and modular runtime for TensorFlow ☆753 · Updated Sep 4, 2025
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆1,005 · Updated Apr 24, 2026
- Tengine is a lightweight, high-performance, modular inference engine for embedded devices ☆4,521 · Updated Mar 6, 2025
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs; this repository contains the open source components ☆12,947 · Updated Apr 13, 2026
- CUDA Templates and Python DSLs for High-Performance Linear Algebra ☆9,638 · Updated this week
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code, specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference ☆4,718 · Updated Apr 9, 2026
- Conversion to/from half-precision floating-point formats ☆384 · Updated Aug 16, 2025
- Visualizer for neural network, deep learning, and machine learning models ☆32,805 · Updated this week
- Library for specialized dense and sparse matrix operations, and deep learning primitives ☆949 · Updated Mar 18, 2026
- Performance-portable, length-agnostic SIMD with runtime dispatch ☆5,486 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,968 · Updated this week
- Transformer-related optimization, including BERT and GPT ☆6,412 · Updated Mar 27, 2024
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more ☆35,484 · Updated this week
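
Several of the entries above (the low-precision matrix multiplication, quantized-operator, and quantization-toolkit libraries) revolve around one idea: quantize weights and activations to 8-bit integers, multiply and accumulate in a wide integer type, then rescale back to floating point. A minimal pure-Python sketch of that scheme follows; the function names and the simple per-tensor scale/zero-point quantization are illustrative, not the API of any library listed here.

```python
def quantize(xs, scale, zero_point):
    """Map a list of floats to int8 values with a per-tensor scale and zero point."""
    qs = []
    for x in xs:
        q = round(x / scale) + zero_point
        qs.append(max(-128, min(127, q)))  # saturate to the int8 range
    return qs

def qgemm(a_q, b_q, scale_a, scale_b, zp_a, zp_b):
    """Integer matmul with wide (int32-style) accumulation, then dequantize.

    a_q is an M x K list of int8 rows; b_q is a K x N list of int8 rows.
    """
    m, k, n = len(a_q), len(b_q), len(b_q[0])
    out = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            acc = 0  # Python ints stand in for the int32 accumulator
            for p in range(k):
                acc += (a_q[i][p] - zp_a) * (b_q[p][j] - zp_b)
            out[i][j] = acc * scale_a * scale_b  # rescale back to float
    return out
```

Real implementations vectorize the inner loop and fold the zero-point corrections into precomputed row/column sums, but the arithmetic is the same: subtract zero points, accumulate wide, multiply by the product of the scales.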
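
The half-precision conversion entry above concerns the IEEE 754 binary16 layout: 1 sign bit, 5 exponent bits with bias 15, and 10 fraction bits. A hedged sketch of decoding that bit pattern by hand (the function name is illustrative; Python's own `struct` module already supports the `'e'` half-float format and is used here only as a cross-check):

```python
import struct

def half_to_float(h):
    """Decode a 16-bit IEEE 754 binary16 pattern into a Python float."""
    sign = -1.0 if (h >> 15) & 1 else 1.0
    exp = (h >> 10) & 0x1F      # 5-bit biased exponent
    frac = h & 0x3FF            # 10-bit fraction
    if exp == 0:                # zero or subnormal: no implicit leading 1
        return sign * frac * 2.0 ** -24
    if exp == 0x1F:             # all-ones exponent: infinity or NaN
        return sign * float("inf") if frac == 0 else float("nan")
    return sign * (1.0 + frac / 1024.0) * 2.0 ** (exp - 15)
```

For example, `0x3C00` (exponent 15, fraction 0) decodes to 1.0 and `0xC000` to -2.0; the reverse direction additionally has to handle rounding and overflow to infinity, which is where dedicated conversion libraries earn their keep.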