pytorch / TensorRT
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
☆2,878 · Updated this week
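For orientation, here is a minimal sketch of what ahead-of-time compilation with Torch-TensorRT can look like. It assumes a CUDA-capable GPU with TensorRT and the torch-tensorrt package installed, and uses a torchvision ResNet-18 purely as a placeholder model; exact option names can vary between releases.

```python
import torch
import torchvision
import torch_tensorrt  # pip install torch-tensorrt

# Placeholder model; in principle any traceable/scriptable nn.Module works.
model = torchvision.models.resnet18().eval().cuda()

# Ahead-of-time compilation: supported subgraphs run as TensorRT engines,
# unsupported operators fall back to native PyTorch execution.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.half},  # allow FP16 kernels where available
)

x = torch.randn(1, 3, 224, 224, device="cuda")
with torch.no_grad():
    print(trt_model(x).shape)  # torch.Size([1, 1000])
```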
Alternatives and similar repositories for TensorRT
Users interested in TensorRT are comparing it to the libraries listed below.
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,165 · Updated 2 months ago
- An easy-to-use PyTorch to TensorRT converter ☆4,826 · Updated last year
- Simplify your ONNX model (a minimal usage sketch appears after this list) ☆4,222 · Updated 2 months ago
- CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision. ☆2,598 · Updated 5 months ago
- A tool to modify ONNX models visually, based on Netron and Flask. ☆1,572 · Updated 8 months ago
- ONNX Optimizer ☆770 · Updated last week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Blackwell GPUs ☆2,883 · Updated last week
- A unified library of state-of-the-art model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. ☆1,512 · Updated last week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT. ☆12,347 · Updated this week
- Simple samples for TensorRT programming ☆1,645 · Updated 5 months ago
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,492 · Updated this week
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX ☆2,490 · Updated 2 months ago
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep learning applications ☆5,548 · Updated this week
- Deploy your model with TensorRT quickly. ☆765 · Updated last year
- PyTorch extensions for high performance and large scale training. ☆3,385 · Updated 6 months ago
- Convert ONNX models to PyTorch. ☆709 · Updated 3 weeks ago
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,098 · Updated this week
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, … ☆2,520 · Updated this week
- OpenMMLab Model Deployment Framework ☆3,058 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆6,344 · Updated last year
- TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance. ☆993 · Updated this week
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python. ☆654 · Updated this week
- Triton Python, C++ and Java client libraries, and gRPC-generated client examples for Go, Java and Scala. ☆657 · Updated this week
- TensorFlow backend for ONNX ☆1,325 · Updated last year
- TensorFlow/TensorRT integration ☆744 · Updated last year
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,005 · Updated this week
- This repository contains tutorials and examples for Triton Inference Server ☆797 · Updated this week
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,066 · Updated last year
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. ☆824 · Updated 2 months ago
- PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool. ☆1,766 · Updated last year
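Several of the ONNX-focused entries above follow the same load/transform/save pattern. For the onnx-simplifier entry referenced earlier in the list, a minimal sketch, assuming onnx and onnxsim are installed and using `model.onnx` as a placeholder path for an exported graph:

```python
import onnx
from onnxsim import simplify  # pip install onnxsim

# "model.onnx" is a placeholder path for an exported ONNX graph.
model = onnx.load("model.onnx")

# Constant folding and redundant-node removal; `check` reports whether the
# simplified graph was validated against the original on random inputs.
model_simp, check = simplify(model)
assert check, "simplified ONNX model could not be validated"

onnx.save(model_simp, "model_simplified.onnx")
```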