tensorflow / tensorrt
TensorFlow/TensorRT integration
☆742 · Updated last year
Alternatives and similar repositories for tensorrt:
Users interested in tensorrt are comparing it to the libraries listed below:
- TensorFlow Backend for ONNX ☆1,301 · Updated last year
- TensorFlow models accelerated with NVIDIA TensorRT ☆687 · Updated 4 years ago
- Convert tf.keras/Keras models to ONNX ☆378 · Updated 3 years ago
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,065 · Updated 2 months ago
- Save, Load Frozen Graph and Run Inference From Frozen Graph in TensorFlow 1.x and 2.x ☆302 · Updated 4 years ago
- Explore the Capabilities of the TensorRT Platform ☆264 · Updated 3 years ago
- This repository is for my YT video series about optimizing a TensorFlow deep learning model using TensorRT. We demonstrate optimizing LeN… ☆301 · Updated 5 years ago
- A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning. ☆1,532 · Updated 2 months ago
- Image classification with NVIDIA TensorRT from TensorFlow models. ☆457 · Updated 4 years ago
- ONNX Optimizer ☆700 · Updated last week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,742 · Updated this week
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX ☆2,414 · Updated 3 months ago
- Deploy your model with TensorRT quickly. ☆769 · Updated last year
- Fast and accurate object detection with end-to-end GPU optimization ☆894 · Updated 3 years ago
- Running object detection on a webcam feed using TensorRT on NVIDIA GPUs in Python. ☆221 · Updated 4 years ago
- TVM integration into PyTorch ☆452 · Updated 5 years ago
- A performant and modular runtime for TensorFlow ☆761 · Updated 3 weeks ago
- Common utilities for ONNX converters ☆268 · Updated 5 months ago
- Guide for building custom ops for TensorFlow ☆381 · Updated 2 years ago
- Samples for TensorRT/DeepStream for Tesla & Jetson ☆1,196 · Updated 5 months ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,540 · Updated 5 years ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆474 · Updated 2 weeks ago
- ⚡ Useful scripts when using TensorRT ☆242 · Updated 4 years ago
- A scalable inference server for models optimized with OpenVINO™ ☆723 · Updated this week
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆380 · Updated this week
- Dockerfiles and scripts for ONNX container images ☆137 · Updated 2 years ago
- ONNXMLTools enables conversion of models to ONNX ☆1,074 · Updated 4 months ago
- Triton Python, C++ and Java client libraries, and gRPC-generated client examples for Go, Java and Scala. ☆620 · Updated last week
- An easy-to-use PyTorch to TensorRT converter ☆4,729 · Updated 8 months ago
- A profiling and performance analysis tool for machine learning ☆373 · Updated this week