pytorch / TensorRT
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
☆2,677 · Updated this week
Alternatives and similar repositories for TensorRT:
Users interested in TensorRT are comparing it to the libraries listed below.
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,019 · Updated this week
- An easy-to-use PyTorch to TensorRT converter ☆4,675 · Updated 6 months ago
- Simplify your ONNX model ☆3,976 · Updated 5 months ago
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆11,184 · Updated 2 weeks ago
- Actively maintained ONNX Optimizer ☆672 · Updated 3 weeks ago
- CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision. ☆2,440 · Updated this week
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… ☆5,278 · Updated this week
- TensorFlow/TensorRT integration ☆740 · Updated last year
- Transformer-related optimization, including BERT, GPT ☆6,025 · Updated 10 months ago
- PyTorch extensions for high-performance and large-scale training ☆3,260 · Updated last month
- A tool to modify ONNX models visually, based on Netron and Flask ☆1,420 · Updated 2 weeks ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… ☆2,182 · Updated this week
- A PyTorch extension: tools for easy mixed-precision and distributed training in PyTorch ☆8,537 · Updated 2 weeks ago
- Convert TensorFlow, Keras, TensorFlow.js, and TFLite models to ONNX ☆2,368 · Updated 2 weeks ago
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,221 · Updated this week
- Simple samples for TensorRT programming ☆1,574 · Updated 2 months ago
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,325 · Updated this week
- Deploy your model with TensorRT quickly ☆763 · Updated last year
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster ☆1,029 · Updated 10 months ago
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆8,752 · Updated this week
- Enabling PyTorch on XLA devices (e.g., Google TPU) ☆2,529 · Updated this week
- This repository contains tutorials and examples for Triton Inference Server ☆644 · Updated this week
- TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillati… ☆708 · Updated this week
- Tutorials for creating and using ONNX models ☆3,451 · Updated 7 months ago
- Collection of common code shared among different research projects in the FAIR computer vision team ☆2,077 · Updated 2 months ago
- A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch ☆2,463 · Updated last week
- C++ extensions in PyTorch ☆1,060 · Updated 3 weeks ago
- Triton Python, C++, and Java client libraries, plus gRPC-generated client examples for Go, Java, and Scala ☆600 · Updated last week
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆974 · Updated this week
- TensorFlow backend for ONNX ☆1,295 · Updated 10 months ago