onnx / onnx-docker
Dockerfiles and scripts for ONNX container images
☆138 · Updated 2 years ago
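As a quick illustration of the kind of workload these container images package, here is a minimal, hedged ONNX Runtime inference sketch; the model path, input name, and input shape are placeholders chosen for the example, not files or values taken from the repository.

```python
# Minimal ONNX Runtime inference sketch; "model.onnx" and the 1x3x224x224
# input shape are placeholders, not assets shipped with onnx-docker.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")        # hypothetical model file
input_name = session.get_inputs()[0].name           # first input of the graph
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy})    # run the full graph
print([o.shape for o in outputs])
```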
Alternatives and similar repositories for onnx-docker:
Users who are interested in onnx-docker are comparing it to the libraries listed below.
- Convert tf.keras/Keras models to ONNX ☆378 · Updated 3 years ago (a hedged conversion sketch follows this list)
- Scailable ONNX Python tools ☆96 · Updated 3 months ago
- Running object detection on a webcam feed using TensorRT on NVIDIA GPUs in Python ☆216 · Updated 4 years ago
- Explore the Capabilities of the TensorRT Platform ☆261 · Updated 3 years ago
- ☆57 · Updated 4 years ago
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API ☆132 · Updated last week
- Accelerate PyTorch models with ONNX Runtime ☆358 · Updated 5 months ago
- Common utilities for ONNX converters ☆259 · Updated 2 months ago
- DeepStream 4.x samples to deploy TLT training models ☆85 · Updated 4 years ago
- Examples for using ONNX Runtime for model training ☆326 · Updated 3 months ago
- TensorRT and TensorFlow demo/example (Python, Jupyter notebook) ☆82 · Updated 6 years ago
- Open deep learning compiler stack for CPU, GPU and specialized accelerators ☆91 · Updated last year
- The NNEF Tools repository contains tools to generate and consume NNEF documents ☆223 · Updated this week
- Mish Activation Function for PyTorch ☆148 · Updated 4 years ago
- Serving PyTorch 1.0 Models as a Web Server in C++ ☆226 · Updated 5 years ago
- Save, Load Frozen Graph and Run Inference From Frozen Graph in TensorFlow 1.x and 2.x ☆301 · Updated 4 years ago
- TensorFlow/TensorRT integration ☆740 · Updated last year
- This repository contains the results and code for the MLPerf™ Inference v0.5 benchmark ☆55 · Updated last year
- Easily benchmark machine learning models in PyTorch ☆149 · Updated 10 months ago
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆360 · Updated this week
- Productionize machine learning predictions, with ONNX or without ☆65 · Updated last year
- ☆52 · Updated 4 years ago
- This is a simple demonstration for running a Keras model on TensorFlow with TensorRT integration (TF-TRT) or on TensorRT directly with… ☆67 · Updated 6 years ago
- How to run Keras model inference 3x faster on CPU with Intel OpenVINO ☆34 · Updated 5 years ago
- ☆114 · Updated 4 years ago
- ☆42 · Updated 6 years ago
- An example of using DeepStream SDK for redaction ☆208 · Updated 7 months ago
- ChatBot: sample for TensorRT inference with a TF model ☆46 · Updated 6 years ago
- The ONNX Model Zoo is a collection of pre-trained, state-of-the-art deep learning models, available in the ONNX format ☆37 · Updated 6 years ago
- Neo-AI-DLR is a common runtime for machine learning models compiled by AWS SageMaker Neo, TVM, or TreeLite ☆493 · Updated last year
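The Keras-to-ONNX converter listed above is typically used along these lines; this is a minimal sketch assuming the keras2onnx package and a throwaway tf.keras placeholder model, with the caveat that the exact API can differ between releases.

```python
# Hedged keras2onnx conversion sketch; the tiny model below is a placeholder
# and the keras2onnx API may vary between versions.
import keras2onnx
from tensorflow import keras

# Any tf.keras model works here; this one-layer classifier is illustrative only.
model = keras.Sequential([
    keras.layers.Dense(3, activation="softmax", input_shape=(4,)),
])

onnx_model = keras2onnx.convert_keras(model, model.name)  # Keras graph -> ONNX
keras2onnx.save_model(onnx_model, "model.onnx")           # serialize to disk
```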