tensorflow / tensorrt
TensorFlow/TensorRT integration
☆744 · Updated last year
Alternatives and similar repositories for tensorrt
Users interested in tensorrt are comparing it to the libraries listed below.
- Tensorflow Backend for ONNX ☆1,325 · Updated last year
- Save, Load Frozen Graph and Run Inference From Frozen Graph in TensorFlow 1.x and 2.x ☆304 · Updated 4 years ago
- A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning. ☆1,558 · Updated this week
- Explore the Capabilities of the TensorRT Platform ☆264 · Updated 4 years ago
- Convert tf.keras/Keras models to ONNX ☆380 · Updated 4 years ago
- TensorFlow models accelerated with NVIDIA TensorRT ☆690 · Updated 4 years ago
- A scalable inference server for models optimized with OpenVINO™ ☆788 · Updated this week
- Image classification with NVIDIA TensorRT from TensorFlow models. ☆459 · Updated 5 years ago
- This repository is for my YT video series about optimizing a Tensorflow deep learning model using TensorRT. We demonstrate optimizing LeN… ☆300 · Updated 6 years ago
- Convert TensorFlow, Keras, Tensorflow.js and Tflite models to ONNX ☆2,490 · Updated 2 months ago
- Guide for building custom op for TensorFlow ☆383 · Updated 2 years ago
- Fast and accurate object detection with end-to-end GPU optimization ☆900 · Updated 4 years ago
- A performant and modular runtime for TensorFlow ☆759 · Updated 2 months ago
- Deploy your model with TensorRT quickly. ☆765 · Updated last year
- ONNX Optimizer ☆770 · Updated last week
- Dockerfiles and scripts for ONNX container images ☆138 · Updated 3 years ago
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,165 · Updated 2 months ago
- TVM integration into PyTorch ☆455 · Updated 5 years ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,547 · Updated 6 years ago
- A profiling and performance analysis tool for machine learning ☆446 · Updated this week
- Running object detection on a webcam feed using TensorRT on NVIDIA GPUs in Python. ☆227 · Updated 4 years ago
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆718 · Updated last week
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆496 · Updated this week
- ⚡ Useful scripts when using TensorRT ☆239 · Updated 5 years ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,878 · Updated last week
- ☆371 · Updated 5 months ago
- Samples for TensorRT/Deepstream for Tesla & Jetson ☆1,261 · Updated 3 weeks ago
- A library for high performance deep learning inference on NVIDIA GPUs. ☆556 · Updated 3 years ago
- Code for: "And the bit goes down: Revisiting the quantization of neural networks" ☆631 · Updated 5 years ago
- Common source, scripts and utilities for creating Triton backends. ☆354 · Updated last week