tensorflow/tensorrt
TensorFlow/TensorRT integration
☆739 · Updated last year
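The repository collects examples of TF-TRT, the TensorRT integration built into TensorFlow. For orientation, here is a minimal conversion sketch in TensorFlow 2.x, assuming a GPU build of TensorFlow with TensorRT support; the SavedModel paths are hypothetical placeholders, and older 2.x releases pass precision settings via `conversion_params` rather than a keyword argument:

```python
# A minimal TF-TRT conversion sketch (TensorFlow 2.x); paths are hypothetical.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="/tmp/resnet_saved_model",   # hypothetical input SavedModel
    precision_mode=trt.TrtPrecisionMode.FP16,          # older releases use conversion_params instead
)
converter.convert()                                    # rewrite supported subgraphs as TensorRT ops
converter.save("/tmp/resnet_saved_model_trt")          # hypothetical output path
```

The converted model is an ordinary SavedModel and can be loaded with `tf.saved_model.load` and served like any other.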
Alternatives and similar repositories for tensorrt:
Users interested in tensorrt are comparing it to the libraries listed below.
- TensorFlow models accelerated with NVIDIA TensorRT ☆686 · Updated 4 years ago
- Explore the Capabilities of the TensorRT Platform ☆263 · Updated 3 years ago
- A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning. ☆1,526 · Updated last month
- Convert tf.keras/Keras models to ONNX ☆379 · Updated 3 years ago
- Tensorflow Backend for ONNX ☆1,296 · Updated 11 months ago
- Image classification with NVIDIA TensorRT from TensorFlow models. ☆455 · Updated 4 years ago
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,044 · Updated 2 weeks ago
- This repository is for my YT video series about optimizing a Tensorflow deep learning model using TensorRT. We demonstrate optimizing LeN… ☆302 · Updated 5 years ago
- A profiling and performance analysis tool for TensorFlow ☆369 · Updated this week
- Save, Load Frozen Graph and Run Inference From Frozen Graph in TensorFlow 1.x and 2.x ☆301 · Updated 4 years ago
- Fast and accurate object detection with end-to-end GPU optimization ☆892 · Updated 3 years ago
- ONNX Optimizer ☆681 · Updated last week
- TVM integration into PyTorch ☆452 · Updated 5 years ago
- Convert TensorFlow, Keras, Tensorflow.js and Tflite models to ONNX (see the export sketch after this list) ☆2,386 · Updated last month
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,711 · Updated this week
- A scalable inference server for models optimized with OpenVINO™ ☆717 · Updated this week
- Guide for building custom op for TensorFlow ☆378 · Updated last year
- Deploy your model with TensorRT quickly. ☆765 · Updated last year
- Running object detection on a webcam feed using TensorRT on NVIDIA GPUs in Python. ☆219 · Updated 4 years ago
- ONNXMLTools enables conversion of models to ONNX ☆1,056 · Updated 2 months ago
- A performant and modular runtime for TensorFlow ☆759 · Updated last month
- A benchmark framework for Tensorflow ☆1,149 · Updated last year
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆462 · Updated last week
- Dive into Deep Learning Compiler ☆647 · Updated 2 years ago
- A GPU performance profiling tool for PyTorch models ☆505 · Updated 3 years ago
- Samples for TensorRT/Deepstream for Tesla & Jetson ☆1,181 · Updated 3 months ago
- TensorRT Plugin Autogen Tool ☆369 · Updated last year
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,538 · Updated 5 years ago
- Memory consumption and FLOP count estimates for convnets ☆918 · Updated 6 years ago
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executable from a DNN model description. ☆979 · Updated 6 months ago
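Several entries above (keras2onnx, onnx-tensorflow, onnx-tensorrt, tf2onnx, ONNXMLTools) revolve around exporting TensorFlow/Keras models to ONNX before optimizing them with TensorRT or another runtime. As a rough illustration, here is a hedged sketch using tf2onnx's Python API; the model, input shape, opset, and output path are illustrative assumptions, not taken from this page:

```python
# A minimal Keras-to-ONNX export sketch with tf2onnx; model and paths are placeholders.
import tensorflow as tf
import tf2onnx

model = tf.keras.applications.MobileNetV2(weights=None)  # placeholder model, no downloaded weights
spec = (tf.TensorSpec((None, 224, 224, 3), tf.float32, name="input"),)

# Returns the ONNX ModelProto and also writes it to output_path.
onnx_model, _ = tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=13, output_path="mobilenetv2.onnx"
)
```

The resulting .onnx file can then be fed to ONNX-oriented tools such as onnx-tensorrt or the ONNX Optimizer listed above.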