intel / onnxruntime
ONNX Runtime: cross-platform, high-performance scoring engine for ML models
☆78 · Updated this week
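As context for the listing, here is a minimal sketch of loading a model and running inference with ONNX Runtime's Python API; the model path, input shape, and dtype are placeholder assumptions, not taken from this page.

```python
import numpy as np
import onnxruntime as ort

# Create an inference session; providers select the execution backend.
# CPUExecutionProvider is always available; others depend on the build.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Query the model's declared input so we can feed a correctly named tensor.
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape, input_meta.type)

# Placeholder input: shape and dtype must match what the model expects.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference; passing None returns all model outputs.
outputs = session.run(None, {input_meta.name: dummy})
print(outputs[0].shape)
```

Providers other than CPUExecutionProvider (for example CUDA or OpenVINO) are only usable when the installed onnxruntime build includes them.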
Alternatives and similar repositories for onnxruntime
Users interested in onnxruntime are comparing it to the libraries listed below.
- Repository for OpenVINO's extra modules ☆162 · Updated last week
- MIVisionX toolkit is a set of comprehensive computer vision and machine intelligence libraries, utilities, and applications bundled into … ☆208 · Updated this week
- A framework to generate a Dockerfile, then build, test, and deploy a Docker image with the OpenVINO™ toolkit. ☆71 · Updated last week
- Common utilities for ONNX converters ☆293 · Updated last month
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime ☆441 · Updated this week
- AI-related samples made available by the DevTech ProViz team ☆33 · Updated last year
- nvImageCodec: a library of GPU- and CPU-accelerated codecs featuring a unified interface ☆139 · Updated 3 weeks ago
- OpenVINO Tokenizers extension ☆48 · Updated last week
- C++ Helper Class for Deep Learning Inference Frameworks: TensorFlow Lite, TensorRT, OpenCV, OpenVINO, ncnn, MNN, SNPE, Arm NN, NNabla, ON… ☆298 · Updated 3 years ago
- Convert TensorFlow Lite models (*.tflite) to ONNX. ☆171 · Updated 2 years ago
- AMD's graph optimization engine. ☆273 · Updated this week
- A toolkit showing the GPU's all-round capabilities in video processing ☆193 · Updated 2 years ago
- Counts the number of parameters / MACs / FLOPs for ONNX models. ☆95 · Updated last year
- Windows version of NVIDIA's NCCL ('Nickel') for multi-GPU training; please use https://github.com/NVIDIA/nccl for changes. ☆61 · Updated 2 months ago
- Tencent NCNN with added CUDA support ☆71 · Updated 5 years ago
- A scalable inference server for models optimized with OpenVINO™ ☆823 · Updated this week
- Examples for using ONNX Runtime for model training. ☆361 · Updated last year
- ONNX Optimizer (see the optimization sketch after this list) ☆795 · Updated this week
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆140 · Updated last week
- OpenVINO™ integration with TensorFlow ☆180 · Updated last year
- Use the OpenCV API to run ONNX models with ONNX Runtime. ☆23 · Updated last week
- ONNX Runtime Inference C++ Example ☆257 · Updated 10 months ago
- ☆42 · Updated 3 years ago
- OpenVX sample implementation ☆148 · Updated last year
- Large Language Model ONNX Inference Framework ☆36 · Updated 2 months ago
- Run generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆428 · Updated this week
- oneAPI Deep Neural Network Library (oneDNN) ☆22 · Updated 2 weeks ago
- An easy way to run, test, benchmark, and tune OpenCL kernel files ☆24 · Updated 2 years ago
- A toolkit to help optimize large ONNX models ☆163 · Updated 3 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆532 · Updated this week
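As noted next to the ONNX Optimizer entry above, here is a minimal sketch of applying graph-level optimization passes with the onnxoptimizer Python package; the model paths and the use of the default pass set are illustrative assumptions, not taken from the listing.

```python
import onnx
import onnxoptimizer

# Load the model to optimize; the path is a placeholder.
model = onnx.load("model.onnx")

# Inspect which optimization passes this build of onnxoptimizer exposes.
print(onnxoptimizer.get_available_passes())

# Apply the default pass set; pass an explicit list of pass names
# to narrow the selection to specific graph rewrites.
optimized = onnxoptimizer.optimize(model)

# Validate and save the optimized graph before handing it to a runtime.
onnx.checker.check_model(optimized)
onnx.save(optimized, "model.opt.onnx")
```

Running such passes ahead of time typically strips dead or redundant nodes so the graph loads cleanly into a runtime such as ONNX Runtime or OpenVINO.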