google / XNNPACK
High-efficiency floating-point neural network inference operators for mobile, server, and Web
☆1,812 · Updated this week
Related projects:
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ (☆1,169, updated this week)
- A retargetable MLIR-based machine learning compiler and runtime toolkit. (☆2,559, updated this week)
- Low-precision matrix multiplication (☆1,772, updated 7 months ago)
- The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem. (☆1,301, updated this week)
- oneAPI Deep Neural Network Library (oneDNN) (☆3,579, updated this week)
- Compiler for Neural Network hardware accelerators (☆3,206, updated 4 months ago)
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators (☆1,518, updated 5 years ago)
- Reference implementations of MLPerf™ inference benchmarks (☆1,188, updated 2 weeks ago)
- A performant and modular runtime for TensorFlow (☆753, updated last month)
- A machine learning compiler for GPUs, CPUs, and ML accelerators (☆2,577, updated this week)
- Arm NN ML Software. The code here is a read-only mirror of https://review.mlplatform.org/admin/repos/ml/armnn (☆1,162, updated this week)
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure (☆742, updated this week)
- "Multi-Level Intermediate Representation" Compiler Infrastructure (☆1,735, updated 3 years ago)
- Common in-memory tensor structure (☆890, updated last week)
- TensorFlow Backend for ONNX (☆1,269, updated 5 months ago)
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. (☆952, updated this week)
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologies. (☆2,800, updated 3 weeks ago)
- Convert TensorFlow, Keras, TensorFlow.js, and TFLite models to ONNX (☆2,296, updated 2 weeks ago)
- Acceleration package for neural networks on multi-core CPUs (☆1,671, updated 3 months ago)
- nGraph has moved to OpenVINO (☆1,355, updated 3 years ago)
- The Tensor Algebra SuperOptimizer for Deep Learning (☆687, updated last year)
- ONNXMLTools enables conversion of models to ONNX (☆992, updated 3 months ago)
- Actively maintained ONNX Optimizer (☆634, updated 6 months ago)
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime (☆2,146, updated this week)
- CUDA Templates for Linear Algebra Subroutines (☆5,359, updated this week)
- Simplify your onnx model (☆3,777, updated 2 weeks ago)
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT (☆2,499, updated this week)
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. (☆2,088, updated this week)
- Dive into Deep Learning Compiler (☆640, updated 2 years ago)