onnx / onnx-tensorrt
ONNX-TensorRT: TensorRT backend for ONNX
☆3,175 · Updated last month
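For orientation, here is a minimal sketch of running an ONNX model through the project's Python backend module (onnx_tensorrt.backend, as documented in the repository's README). The model path, device index, and input shape below are placeholders.

```python
import numpy as np
import onnx
import onnx_tensorrt.backend as backend  # installed with onnx-tensorrt

# Load an ONNX model (placeholder path) and build a TensorRT engine for it.
model = onnx.load("/path/to/model.onnx")
engine = backend.prepare(model, device="CUDA:0")

# Run inference; the input shape must match what the model expects.
input_data = np.random.random(size=(1, 3, 224, 224)).astype(np.float32)
output = engine.run(input_data)[0]
print(output.shape)
```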
Alternatives and similar repositories for onnx-tensorrt
Users interested in onnx-tensorrt are comparing it to the libraries listed below.
- An easy-to-use PyTorch to TensorRT converter ☆4,837 · Updated last year
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,907 · Updated this week
- Simplify your ONNX model (see the sketch after this list) ☆4,248 · Updated 3 months ago
- Deploy your model with TensorRT quickly. ☆764 · Updated 2 years ago
- Simple samples for TensorRT programming ☆1,649 · Updated last week
- OpenMMLab Model Deployment Framework ☆3,086 · Updated last year
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX ☆2,500 · Updated 3 months ago
- A tool to modify ONNX models visually, based on Netron and Flask ☆1,586 · Updated last month
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆12,473 · Updated last week
- Implementation of popular deep learning networks with TensorRT network definition API ☆7,602 · Updated this week
- TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet ☆1,785 · Updated 3 months ago
- TensorRT 8. Supports YOLOv5n/s/m/l/x. darknet -> tensorrt. YOLOv4 and YOLOv3 use raw darknet *.weights and *.cfg files. If the wrapper is usef… ☆1,200 · Updated 2 years ago
- TensorFlow/TensorRT integration ☆744 · Updated 2 years ago
- CV-CUDA™ is an open-source, GPU accelerated library for cloud-scale image processing and computer vision. ☆2,620 · Updated last month
- ONNX Optimizer ☆779 · Updated last month
- Tensorflow Backend for ONNX ☆1,327 · Updated last year
- A primitive library for neural networks ☆1,369 · Updated last year
- yolort is a runtime stack for YOLOv5 on specialized accelerators such as TensorRT, LibTorch, ONNX Runtime, TVM and ncnn. ☆731 · Updated last month
- Fast and accurate object detection with end-to-end GPU optimization ☆901 · Updated 4 years ago
- Samples for TensorRT/DeepStream for Tesla & Jetson ☆1,274 · Updated 2 months ago
- PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool. ☆1,774 · Updated last year
- Convert mmdetection models to TensorRT; supports fp16, int8, batch input, dynamic shapes, etc. ☆598 · Updated last year
- C++ library based on TensorRT integration ☆2,837 · Updated 2 years ago
- Deep neural network library and toolkit to do high-performance inference on NVIDIA Jetson platforms ☆720 · Updated 2 years ago
- Tutorials for creating and using ONNX models ☆3,635 · Updated last year
- Scaled-YOLOv4: Scaling Cross Stage Partial Network ☆2,030 · Updated last year
- DeepStream SDK Python bindings and sample applications ☆1,742 · Updated 2 months ago
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,111 · Updated this week
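The "Simplify your ONNX model" entry above refers to onnx-simplifier. A minimal sketch of its Python API follows, assuming the onnxsim package is installed (pip install onnxsim); the model paths are placeholders.

```python
import onnx
from onnxsim import simplify  # pip install onnxsim

# Load the original model (placeholder path).
model = onnx.load("/path/to/model.onnx")

# simplify() folds constants and removes redundant nodes; `check` reports
# whether the simplified model's outputs still match the original's.
model_simplified, check = simplify(model)
assert check, "simplified ONNX model could not be validated"

onnx.save(model_simplified, "/path/to/model-sim.onnx")
```

Simplifying a model this way before handing it to a TensorRT converter often removes graph patterns the converter would otherwise reject.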