NVIDIA / TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
☆11,828 · Updated this week
Alternatives and similar repositories for TensorRT
Users interested in TensorRT are comparing it to the libraries listed below.
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT (☆2,795, updated this week)
- ONNX-TensorRT: TensorRT backend for ONNX (☆3,103, updated 3 weeks ago)
- An easy-to-use PyTorch to TensorRT converter (☆4,773, updated 10 months ago)
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. (☆9,443, updated this week)
- Simplify your ONNX model (☆4,114, updated 10 months ago)
- Open standard for machine learning interoperability (☆19,202, updated this week)
- Tutorials for creating and using ONNX models (☆3,565, updated 11 months ago)
- Transformer-related optimization, including BERT and GPT (☆6,231, updated last year)
- Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators (☆12,435, updated this week)
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep learning…