google / XNNPACK
High-efficiency floating-point neural network inference operators for mobile, server, and Web
☆2,105 · Updated this week
Alternatives and similar repositories for XNNPACK
Users interested in XNNPACK are comparing it to the libraries listed below.
- A performant and modular runtime for TensorFlow ☆759 · Updated last week
- Low-precision matrix multiplication ☆1,815 · Updated last year
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,433 · Updated this week
- A retargetable MLIR-based machine learning compiler and runtime toolkit ☆3,332 · Updated this week
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,542 · Updated 6 years ago
- Compiler for Neural Network hardware accelerators ☆3,311 · Updated last year
- ONNX Optimizer ☆752 · Updated last month
- TensorFlow Backend for ONNX ☆1,321 · Updated last year
- Arm NN ML Software. The code here is a read-only mirror of https://review.mlplatform.org/admin/repos/ml/armnn ☆1,281 · Updated this week
- The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem ☆1,627 · Updated this week
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models ☆2,443 · Updated this week
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆907 · Updated last week
- "Multi-Level Intermediate Representation" Compiler Infrastructure ☆1,753 · Updated 4 years ago
- ☆313 · Updated last month
- oneAPI Deep Neural Network Library (oneDNN) ☆3,879 · Updated this week
- ONNXMLTools enables conversion of models to ONNX ☆1,111 · Updated 3 months ago
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆3,472 · Updated this week
- Reference implementations of MLPerf™ inference benchmarks ☆1,451 · Updated last week
- nGraph has moved to OpenVINO ☆1,343 · Updated 4 years ago
- Convert TensorFlow, Keras, TensorFlow.js, and TFLite models to ONNX ☆2,469 · Updated last month
- Bolt is a deep learning library with high performance and heterogeneous flexibility ☆954 · Updated 5 months ago
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime ☆409 · Updated this week
- TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile / IoT devices ☆420 · Updated 3 weeks ago
- Acceleration package for neural networks on multi-core CPUs ☆1,699 · Updated last year
- A toolkit to optimize ML models for deployment with Keras and TensorFlow, including quantization and pruning ☆1,552 · Updated this week
- A flexible and efficient deep neural network (DNN) compiler that generates a high-performance executable from a DNN model description ☆994 · Updated 11 months ago
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,491 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,848 · Updated this week
- LiteRT continues the legacy of TensorFlow Lite as the trusted, high-performance runtime for on-device AI. Now with LiteRT Next, we're exp… ☆772 · Updated this week
- The Tensor Algebra SuperOptimizer for Deep Learning ☆731 · Updated 2 years ago