intel / onnxruntime
ONNX Runtime: cross-platform, high performance scoring engine for ML models
☆70 · Updated this week
Alternatives and similar repositories for onnxruntime
Users interested in onnxruntime are comparing it to the libraries listed below.
- The MIVisionX toolkit: a comprehensive set of computer vision and machine intelligence libraries, utilities, and applications bundled into … ☆197 · Updated this week
- Repository for OpenVINO's extra modules ☆137 · Updated 2 weeks ago
- The framework to generate a Dockerfile, then build, test, and deploy a Docker image with the OpenVINO™ toolkit. ☆67 · Updated last month
- Common utilities for ONNX converters ☆276 · Updated last month
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime ☆409 · Updated 2 weeks ago
- AI-related samples made available by the DevTech ProViz team ☆30 · Updated last year
- nvImageCodec: a library of GPU- and CPU-accelerated codecs featuring a unified interface ☆116 · Updated 2 weeks ago
- oneAPI Deep Neural Network Library (oneDNN) ☆20 · Updated last week
- AMD's graph optimization engine. ☆240 · Updated this week
- OpenVINO Tokenizers extension ☆39 · Updated last week
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆136 · Updated 2 weeks ago
- Use the OpenCV API to run ONNX models with ONNX Runtime. ☆22 · Updated last week
- Computation using data flow graphs for scalable machine learning ☆68 · Updated this week
- Convert TensorFlow Lite models (*.tflite) to ONNX. ☆162 · Updated last year
- OpenVINO™ integration with TensorFlow ☆179 · Updated last year
- A toolkit to help optimize ONNX models ☆198 · Updated this week
- Large language model ONNX inference framework ☆36 · Updated 7 months ago
- C++ Helper Class for Deep Learning Inference Frameworks: TensorFlow Lite, TensorRT, OpenCV, OpenVINO, ncnn, MNN, SNPE, Arm NN, NNabla, ON… ☆292 · Updated 3 years ago
- Tencent NCNN with added CUDA support ☆69 · Updated 4 years ago
- Windows version of NVIDIA's NCCL ('Nickel') for multi-GPU training - please use https://github.com/NVIDIA/nccl for changes. ☆61 · Updated last year
- A scalable inference server for models optimized with OpenVINO™ ☆752 · Updated this week
- cudnn_frontend provides a C++ wrapper for the cuDNN backend API and samples showing how to use it ☆606 · Updated last week
- Generate saved_model, tfjs, tf-trt, EdgeTPU, CoreML, quantized tflite, ONNX, OpenVINO, Myriad Inference Engine blob and .pb from .tflite.… ☆269 · Updated 2 years ago
- TensorFlow Lite external delegate based on TIM-VX ☆47 · Updated 7 months ago
- An easy way to run, test, benchmark and tune OpenCL kernel files ☆23 · Updated 2 years ago
- ☆122 · Updated last week
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆323 · Updated this week
- Count the number of parameters / MACs / FLOPs for ONNX models. ☆94 · Updated 10 months ago
- TensorFlow Lite C/C++ distribution libraries and headers ☆122 · Updated 6 months ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆376 · Updated this week
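The parameter-counting entry in the list above boils down to simple arithmetic: a model's parameter count is the sum, over its weight tensors, of the product of each tensor's dimensions. A minimal sketch of that idea, using a plain dict of hypothetical initializer shapes in place of a real model (a real tool would read these shapes from the model's GraphProto via the `onnx` package; the tensor names and shapes here are made up for illustration):

```python
from math import prod

# Hypothetical initializer (weight tensor) shapes; in a real ONNX model
# these would come from graph.initializer in the model's GraphProto.
initializers = {
    "conv1.weight": (32, 3, 3, 3),  # out_ch, in_ch, kH, kW
    "conv1.bias":   (32,),
    "fc.weight":    (10, 512),
    "fc.bias":      (10,),
}

def count_parameters(shapes):
    # Total parameters = sum over weight tensors of the product of dims.
    return sum(prod(shape) for shape in shapes.values())

print(count_parameters(initializers))  # 864 + 32 + 5120 + 10 = 6026
```

MAC/FLOP counting works the same way in spirit but is per-operator: for example, a Conv node's MACs depend on the output spatial size as well as the kernel shape, which is why such tools walk the graph node by node rather than just the initializer list.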
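The kernel run/benchmark/tune entry above follows a pattern common to all such tools: perform a few warm-up runs so one-time costs (compilation, caches) don't skew the result, then average wall-clock time over repeated timed runs. A library-agnostic sketch of that harness (the function and its parameters are illustrative, not the tool's actual API):

```python
import time

def benchmark(fn, warmup=3, iters=20):
    # Warm-up runs amortize one-time costs (kernel compile, caches).
    for _ in range(warmup):
        fn()
    # Average wall-clock seconds per call over `iters` timed runs.
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# Example workload: a CPU-bound loop standing in for a kernel launch.
avg_s = benchmark(lambda: sum(x * x for x in range(10_000)))
print(f"avg: {avg_s * 1e6:.1f} us")
```

A tuner then wraps this harness in a loop over candidate kernel configurations (work-group sizes, unroll factors) and keeps the configuration with the lowest average time.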