onnx/onnx-docker
Dockerfiles and scripts for ONNX container images
☆138 · Updated 3 years ago
Alternatives and similar repositories for onnx-docker
Users interested in onnx-docker are comparing it to the libraries listed below.
- Convert tf.keras/Keras models to ONNX ☆382 · Updated 4 years ago
- TensorFlow/TensorRT integration ☆743 · Updated 2 years ago
- Accelerate PyTorch models with ONNX Runtime ☆367 · Updated last week
- ☆56 · Updated 5 years ago
- Common utilities for ONNX converters ☆292 · Updated last month
- Running object detection on a webcam feed using TensorRT on NVIDIA GPUs in Python ☆229 · Updated 4 years ago
- A scalable inference server for models optimized with OpenVINO™ ☆820 · Updated this week
- Explore the Capabilities of the TensorRT Platform ☆263 · Updated 4 years ago
- Neo-AI-DLR is a common runtime for machine learning models compiled by AWS SageMaker Neo, TVM, or TreeLite ☆497 · Updated 2 years ago
- TensorFlow-nGraph bridge ☆136 · Updated 4 years ago
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API ☆140 · Updated last week
- TensorRT and TensorFlow demo/example (Python, Jupyter notebook) ☆81 · Updated 6 years ago
- Scailable ONNX Python tools ☆98 · Updated last year
- Notebooks showing the usage of TensorFlow Lite for quantizing deep neural networks ☆173 · Updated 3 years ago
- Save, load frozen graphs and run inference from frozen graphs in TensorFlow 1.x and 2.x ☆303 · Updated 5 years ago
- Tutorial for using custom layers with OpenVINO (Intel Deep Learning Toolkit) ☆106 · Updated 6 years ago
- Governance of the Keras API ☆144 · Updated 2 years ago
- ☆116 · Updated 5 years ago
- An example of using the DeepStream SDK for redaction ☆212 · Updated last year
- Custom implementation of EfficientDet https://arxiv.org/abs/1911.09070 ☆97 · Updated 2 years ago
- Convert scikit-learn models and pipelines to ONNX ☆610 · Updated 2 months ago
- DeepStream 4.x samples to deploy TLT training models ☆85 · Updated 5 years ago
- OpenVINO™ integration with TensorFlow ☆180 · Updated last year
- How to run Keras model inference 3x faster with CPU and Intel OpenVINO ☆34 · Updated 6 years ago
- Examples using the TensorFlow Lite API to run inference on Coral devices ☆186 · Updated last year
- Serving PyTorch 1.0 models as a web server in C++ ☆226 · Updated 6 years ago
- Reference implementation for using ONNX Runtime with Azure IoT Edge ☆64 · Updated 5 years ago
- Easily benchmark machine learning models in PyTorch ☆150 · Updated last year
- [DEPRECATED] Amazon Deep Learning's Keras with Apache MXNet support ☆288 · Updated 2 years ago
- Results and code for the MLPerf™ Inference v0.5 benchmark ☆55 · Updated 6 months ago