openvinotoolkit / oneDNN
oneAPI Deep Neural Network Library (oneDNN)
☆21 · Updated last week
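For context, oneDNN exposes low-level primitives (convolutions, matmuls, eltwise ops) through a C++ API. A minimal sketch of running an in-place ReLU eltwise primitive on CPU, assuming the oneDNN 3.x primitive-descriptor API, might look like this:

```cpp
#include <dnnl.hpp>
#include <vector>

int main() {
    using namespace dnnl;

    // Engine and stream on the default CPU device.
    engine eng(engine::kind::cpu, 0);
    stream strm(eng);

    // A 1x3x8x8 f32 tensor in NCHW layout, backed by user-allocated memory.
    memory::desc md({1, 3, 8, 8}, memory::data_type::f32, memory::format_tag::nchw);
    std::vector<float> data(1 * 3 * 8 * 8, -1.0f);
    memory buf(md, eng, data.data());

    // In-place forward ReLU: dst = max(src, 0).
    eltwise_forward::primitive_desc pd(eng, prop_kind::forward_inference,
            algorithm::eltwise_relu, md, md, /*alpha=*/0.f, /*beta=*/0.f);
    eltwise_forward(pd).execute(strm, {{DNNL_ARG_SRC, buf}, {DNNL_ARG_DST, buf}});
    strm.wait();
    return 0;
}
```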
Alternatives and similar repositories for oneDNN
Users interested in oneDNN are comparing it to the libraries listed below.
- AMD's graph optimization engine. ☆253 · Updated this week
- MIVisionX toolkit is a set of comprehensive computer vision and machine intelligence libraries, utilities, and applications bundled into … ☆200 · Updated this week
- ONNX Runtime: cross-platform, high performance scoring engine for ML models ☆72 · Updated this week
- OpenVINO Intel NPU Compiler ☆71 · Updated this week
- Common utilities for ONNX converters ☆282 · Updated last month
- The framework to generate a Dockerfile, build, test, and deploy a Docker image with OpenVINO™ toolkit. ☆67 · Updated 3 weeks ago
- OpenVINO Tokenizers extension ☆42 · Updated last week
- ☆127 · Updated last week
- Inference of quantization-aware trained networks using TensorRT ☆83 · Updated 2 years ago
- OpenVX sample implementation ☆147 · Updated last year
- cudnn_frontend provides a C++ wrapper for the cuDNN backend API and samples showing how to use it ☆623 · Updated 3 weeks ago
- Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators ☆470 · Updated this week
- Repository for OpenVINO's extra modules ☆141 · Updated this week
- [DEPRECATED] Moved to the ROCm/rocm-libraries repo ☆111 · Updated this week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆400 · Updated last week
- AI-related samples made available by the DevTech ProViz team ☆30 · Updated last year
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… ☆418 · Updated this week
- OpenVINO™ integration with TensorFlow ☆179 · Updated last year
- oneAPI Level Zero Specification Headers and Loader ☆280 · Updated last week
- TensorFlow Lite external delegate based on TIM-VX ☆49 · Updated 9 months ago
- With OpenVINO Test Drive, users can run large language models (LLMs) and models trained by Intel Geti on their devices, including AI PCs … ☆33 · Updated 2 weeks ago
- VeriSilicon Tensor Interface Module ☆237 · Updated 9 months ago
- ☆157 · Updated 3 months ago
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime ☆414 · Updated last week
- Benchmark for embedded-AI deep learning inference engines, such as NCNN / TNN / MNN / TensorFlow Lite, etc. ☆204 · Updated 4 years ago
- Run generative AI models with a simple C++/Python API using the OpenVINO Runtime (see the sketch after this list) ☆345 · Updated last week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆498 · Updated this week
- edge/mobile transformer-based Vision DNN inference benchmark ☆16 · Updated last month
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆921 · Updated this week
- ☆271 · Updated this week
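To illustrate the OpenVINO GenAI entry above ("Run generative AI models with simple C++/Python API"), here is a minimal sketch using the `ov::genai::LLMPipeline` C++ API. It assumes a model has already been exported to OpenVINO IR; the model folder name is hypothetical.

```cpp
#include "openvino/genai/llm_pipeline.hpp"
#include <iostream>
#include <string>

int main() {
    // Hypothetical path to a model exported to OpenVINO IR
    // (e.g. via optimum-intel / optimum-cli export openvino).
    std::string models_path = "TinyLlama-1.1B-Chat-v1.0-ov";

    // Build a text-generation pipeline on CPU and produce a short completion.
    ov::genai::LLMPipeline pipe(models_path, "CPU");
    std::cout << pipe.generate("What is oneDNN?", ov::genai::max_new_tokens(64))
              << std::endl;
    return 0;
}
```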