pytorch / TensorRT
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
☆2,841 · Updated this week
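As a quick orientation before the comparison list, here is a minimal sketch of the compile workflow Torch-TensorRT exposes; the torchvision model, input shape, and FP16 precision below are illustrative assumptions, not requirements.

```python
# Minimal sketch of Torch-TensorRT compilation, assuming a CUDA GPU,
# torch_tensorrt, and torchvision are installed. The model, input
# shape, and FP16 precision are illustrative placeholders.
import torch
import torch_tensorrt
from torchvision.models import resnet50

model = resnet50(weights=None).eval().cuda()

# Compile the model down to TensorRT engines; unsupported layers stay
# in PyTorch, so the result is still called like an ordinary module.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224), dtype=torch.half)],
    enabled_precisions={torch.half},
)

with torch.no_grad():
    out = trt_model(torch.randn(1, 3, 224, 224).half().cuda())
```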
Alternatives and similar repositories for TensorRT
Users interested in TensorRT are comparing it to the libraries listed below.
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,136 · Updated 3 weeks ago
- An easy-to-use PyTorch to TensorRT converter (conversion sketch after this list) ☆4,792 · Updated last year
- Simplify your ONNX model (simplification sketch after this list) ☆4,146 · Updated 11 months ago
- CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision. ☆2,564 · Updated 3 months ago
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open-source compone… ☆12,039 · Updated this week
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,427 · Updated this week
- ONNX Optimizer ☆742 · Updated 2 weeks ago
- Simple samples for TensorRT programming ☆1,632 · Updated 2 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ☆2,645 · Updated this week
- A tool to modify ONNX models in a visual fashion, based on Netron and Flask. ☆1,548 · Updated 5 months ago
- A unified library of state-of-the-art model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. … ☆1,120 · Updated this week
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… ☆5,484 · Updated last week
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX ☆2,463 · Updated last month
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,472 · Updated last week
- PyTorch extensions for high-performance and large-scale training. ☆3,361 · Updated 3 months ago
- TensorFlow/TensorRT integration ☆743 · Updated last year
- OpenMMLab Model Deployment Framework ☆3,016 · Updated 10 months ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,057 · Updated last year
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆9,644 · Updated last week
- Deploy your model with TensorRT quickly. ☆769 · Updated last year
- Transformer-related optimization, including BERT, GPT ☆6,274 · Updated last year
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,074 · Updated this week
- Triton Python, C++ and Java client libraries, and gRPC-generated client examples for Go, Java and Scala ☆641 · Updated last week
- A primitive library for neural networks ☆1,349 · Updated 8 months ago
- TorchBench is a collection of open-source benchmarks used to evaluate PyTorch performance. ☆976 · Updated this week
- Serve, optimize and scale PyTorch models in production ☆4,350 · Updated 2 weeks ago
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python. ☆631 · Updated last week
- Collection of common code that's shared among different research projects in the FAIR computer vision team. ☆2,161 · Updated last month
- Reference implementations of MLPerf™ inference benchmarks ☆1,441 · Updated last week
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. ☆816 · Updated last week
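For comparison with the compiler workflow at the top of the page, here is a minimal sketch of the conversion path referenced in the list above (the easy-to-use PyTorch to TensorRT converter); the torchvision model, input shape, and FP16 flag are illustrative assumptions.

```python
# Minimal sketch of the torch2trt conversion path, assuming torch2trt,
# torchvision, and a CUDA GPU are available; the model and shape are
# illustrative placeholders.
import torch
from torch2trt import torch2trt
from torchvision.models import resnet18

model = resnet18(weights=None).eval().cuda()
x = torch.randn(1, 3, 224, 224).cuda()

# torch2trt traces the module and builds a TensorRT engine wrapped in
# a module-like object whose forward() executes the engine.
model_trt = torch2trt(model, [x], fp16_mode=True)

# Sanity check: outputs should agree with the original model within
# FP16 tolerance.
with torch.no_grad():
    print(torch.max(torch.abs(model(x) - model_trt(x))))
```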
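And a minimal sketch of the ONNX simplification step referenced in the list; "model.onnx" is a placeholder path for a graph already exported, e.g. via torch.onnx.export.

```python
# Minimal sketch of onnx-simplifier usage; "model.onnx" is a
# placeholder for an already-exported ONNX graph.
import onnx
from onnxsim import simplify

model = onnx.load("model.onnx")

# simplify() folds constants and strips redundant ops, returning the
# simplified graph and a flag confirming its outputs still match.
model_simplified, check = simplify(model)
assert check, "simplified model failed the output-equivalence check"

onnx.save(model_simplified, "model_simplified.onnx")
```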