tensorflow / tensorrt
TensorFlow/TensorRT integration
☆736 · Updated 11 months ago
Related projects
Alternatives and complementary repositories for tensorrt
- TensorFlow models accelerated with NVIDIA TensorRT ☆684 · Updated 3 years ago
- Explore the Capabilities of the TensorRT Platform ☆260 · Updated 3 years ago
- Convert tf.keras/Keras models to ONNX ☆381 · Updated 3 years ago
- A scalable inference server for models optimized with OpenVINO™ ☆675 · Updated this week
- Image classification with NVIDIA TensorRT from TensorFlow models ☆455 · Updated 4 years ago
- TensorFlow backend for ONNX ☆1,284 · Updated 7 months ago
- Save, load, and run inference from frozen graphs in TensorFlow 1.x and 2.x ☆300 · Updated 3 years ago
- This repository is for my YouTube video series about optimizing a TensorFlow deep learning model using TensorRT. We demonstrate optimizing LeN… ☆301 · Updated 5 years ago
- ONNX-TensorRT: TensorRT backend for ONNX ☆2,953 · Updated 2 weeks ago
- Actively maintained ONNX Optimizer ☆647 · Updated 8 months ago
- Fast and accurate object detection with end-to-end GPU optimization ☆887 · Updated 3 years ago
- Samples for TensorRT/DeepStream for Tesla & Jetson ☆1,141 · Updated last month
- Deploy your model quickly with TensorRT ☆762 · Updated 11 months ago
- Convert TensorFlow, Keras, TensorFlow.js, and TFLite models to ONNX ☆2,327 · Updated 2 months ago
- A toolkit to optimize ML models for deployment with Keras and TensorFlow, including quantization and pruning ☆1,493 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,597 · Updated this week
- Guide for building custom ops for TensorFlow ☆378 · Updated last year
- TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet ☆1,751 · Updated 3 months ago
- Running object detection on a webcam feed using TensorRT on NVIDIA GPUs in Python ☆211 · Updated 3 years ago
- Deep neural network library and toolkit for high-performance inference on NVIDIA Jetson platforms ☆718 · Updated last year
- TVM integration into PyTorch ☆452 · Updated 4 years ago
- A performant and modular runtime for TensorFlow ☆756 · Updated last month
- Triton Model Analyzer is a CLI tool to help understand the compute and memory requirements of the Triton Inference Serv… ☆433 · Updated last week
- Dataset, streaming, and file system extensions maintained by TensorFlow SIG-IO ☆706 · Updated this week
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆943 · Updated this week
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆683 · Updated this week
- Sample apps to demonstrate how to deploy models trained with TAO on DeepStream ☆377 · Updated last month
- Common utilities for ONNX converters ☆251 · Updated 5 months ago
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server ☆279 · Updated 2 years ago
- A profiling and performance analysis tool for TensorFlow ☆360 · Updated this week