google / XNNPACK
High-efficiency floating-point neural network inference operators for mobile, server, and Web
☆1,960 · Updated this week
Alternatives and similar repositories for XNNPACK:
Users interested in XNNPACK are comparing it to the libraries listed below.
- A retargetable MLIR-based machine learning compiler and runtime toolkit. ☆2,996 · Updated this week
- oneAPI Deep Neural Network Library (oneDNN) ☆3,726 · Updated this week
- Compiler for Neural Network hardware accelerators ☆3,273 · Updated 9 months ago
- Convert TensorFlow, Keras, TensorFlow.js, and TFLite models to ONNX ☆2,368 · Updated 2 weeks ago
- Actively maintained ONNX Optimizer ☆672 · Updated 3 weeks ago
- A performant and modular runtime for TensorFlow ☆759 · Updated last week
- The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem. ☆1,424 · Updated this week
- Low-precision matrix multiplication ☆1,792 · Updated last year
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆2,963 · Updated this week
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆812 · Updated this week
- Arm NN ML Software. The code here is a read-only mirror of https://review.mlplatform.org/admin/repos/ml/armnn ☆1,226 · Updated this week
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,260 · Updated this week
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,535 · Updated 5 years ago
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologi… ☆2,913 · Updated this week
- TensorFlow Backend for ONNX ☆1,295 · Updated 10 months ago
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,221 · Updated this week
- Examples for using ONNX Runtime for machine learning inferencing. ☆1,296 · Updated 3 weeks ago
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆974 · Updated 5 months ago
- Common in-memory tensor structure ☆942 · Updated last week
- ONNXMLTools enables conversion of models to ONNX ☆1,050 · Updated last month
- nGraph has moved to OpenVINO ☆1,349 · Updated 4 years ago
- "Multi-Level Intermediate Representation" Compiler Infrastructure ☆1,737 · Updated 3 years ago
- Reference implementations of MLPerf™ inference benchmarks ☆1,315 · Updated this week
- Simplify your ONNX model ☆3,976 · Updated 5 months ago
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆12,028 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,677 · Updated this week
- [ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl ☆1,719 · Updated last year
- Collective communications library with various primitives for multi-machine training. ☆1,263 · Updated last week
- Backward-compatible ML compute opset inspired by HLO/MHLO ☆446 · Updated this week
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆360 · Updated this week