google / XNNPACK
High-efficiency floating-point neural network inference operators for mobile, server, and Web
☆2,170 · Updated this week
Alternatives and similar repositories for XNNPACK
Users interested in XNNPACK are comparing it to the libraries listed below.
- A performant and modular runtime for TensorFlow ☆759 · Updated 2 months ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,547 · Updated 6 years ago
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,476 · Updated this week
- Arm NN ML Software. ☆1,286 · Updated last week
- Tensorflow Backend for ONNX ☆1,325 · Updated last year
- ONNXMLTools enables conversion of models to ONNX ☆1,124 · Updated 5 months ago
- A retargetable MLIR-based machine learning compiler and runtime toolkit. ☆3,456 · Updated this week
- Low-precision matrix multiplication ☆1,817 · Updated last year
- ONNX Optimizer ☆772 · Updated 2 weeks ago
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,496 · Updated this week
- The Torch-MLIR project aims to provide first class support from the PyTorch ecosystem to the MLIR ecosystem. ☆1,678 · Updated this week
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆933 · Updated this week
- Compiler for Neural Network hardware accelerators ☆3,316 · Updated last year
- Convert TensorFlow, Keras, Tensorflow.js and Tflite models to ONNX ☆2,492 · Updated 2 months ago
- oneAPI Deep Neural Network Library (oneDNN) ☆3,919 · Updated this week
- Simplify your onnx model (see the usage sketch after this list) ☆4,229 · Updated 2 months ago
- Supporting PyTorch models with the Google AI Edge TFLite runtime. ☆828 · Updated this week
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologi… ☆3,072 · Updated this week
- nGraph has moved to OpenVINO ☆1,344 · Updated 5 years ago
- LiteRT, successor to TensorFlow Lite, is Google's on-device framework for high-performance ML & GenAI deployment on edge platforms, via e… ☆943 · Updated this week
- "Multi-Level Intermediate Representation" Compiler Infrastructure☆1,759Updated 4 years ago
- Reference implementations of MLPerf® inference benchmarks☆1,484Updated this week
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆424 · Updated this week
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, … ☆2,525 · Updated this week
- TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile / IoT devices. ☆426 · Updated this week
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆3,670 · Updated this week
- A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning. ☆1,558 · Updated this week
- common in-memory tensor structure ☆1,098 · Updated last month
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,884 · Updated this week
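For orientation, here is a minimal sketch of driving one of the tools listed above, onnx-simplifier ("Simplify your onnx model"), from Python. The file names are placeholders and the call signatures reflect the project's usual documented usage, so treat this as illustrative rather than as a definitive reference.

```python
# Hedged sketch: run onnx-simplifier over an exported ONNX model.
# Assumes the `onnx` and `onnxsim` packages are installed; paths are placeholders.
import onnx
from onnxsim import simplify

model = onnx.load("model.onnx")               # placeholder input path
simplified_model, check_ok = simplify(model)  # graph-level simplification passes
if not check_ok:
    raise RuntimeError("simplified model failed onnxsim's consistency check")
onnx.save(simplified_model, "model.simplified.onnx")  # placeholder output path
```

Several of the other converters in the list (tf2onnx, ONNXMLTools, the TensorFlow backend for ONNX) follow a similar load-convert-save pattern, differing mainly in which framework sits on the input or output side.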