triton-inference-server / tensorrt_backend
The Triton backend for TensorRT.
☆84 · Updated last week
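For orientation, a model served through this backend is queried with the same Triton client API as any other backend. The sketch below is a minimal example, not taken from the repository: the model name resnet50_trt, the tensor names "input"/"output", the shapes, and the server address localhost:8000 are all illustrative assumptions.

```python
# Minimal sketch: querying a hypothetical TensorRT-backed model over Triton's HTTP API.
# Model name, tensor names, shapes, and server URL are assumptions for illustration only.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request: one FP32 image-shaped input and one requested output.
inp = httpclient.InferInput("input", [1, 3, 224, 224], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 3, 224, 224).astype(np.float32))
out = httpclient.InferRequestedOutput("output")

result = client.infer(model_name="resnet50_trt", inputs=[inp], outputs=[out])
print(result.as_numpy("output").shape)
```

The client only sees the model name and tensor signature, so the same request works unchanged if the model is later redeployed on a different backend.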
Alternatives and similar repositories for tensorrt_backend
Users interested in tensorrt_backend are comparing it to the libraries listed below.
- The Triton backend for the ONNX Runtime. ☆172 · Updated 2 weeks ago
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's python API. ☆140 · Updated this week
- Common source, scripts and utilities for creating Triton backends. ☆366 · Updated 3 weeks ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆216 · Updated this week
- Common source, scripts and utilities shared across all Triton repositories. ☆79 · Updated 3 weeks ago
- The Triton backend for the PyTorch TorchScript models. ☆172 · Updated 2 weeks ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆503 · Updated this week
- ☆328 · Updated last week
- Triton Python, C++ and Java client libraries, and GRPC-generated client examples for go, java and scala. ☆677 · Updated last week
- ☆133 · Updated this week
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python (a minimal model.py sketch follows this list). ☆667 · Updated this week
- Common utilities for ONNX converters ☆293 · Updated last month
- ☆33 · Updated 3 years ago
- Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inferen… ☆73 · Updated 3 weeks ago
- A Toolkit to Help Optimize Large Onnx Model ☆163 · Updated 3 months ago
- OpenVINO backend for Triton. ☆37 · Updated 3 weeks ago
- ONNX Python Examples ☆16 · Updated 3 years ago
- Deploy stable diffusion model with onnx/tensorrt + tritonserver ☆126 · Updated 2 years ago
- ☆36 · Updated last year
- ☆206 · Updated 8 months ago
- Serving Inside Pytorch ☆170 · Updated last week
- Easy and Efficient Quantization for Transformers ☆204 · Updated last week
- Dynamic batching library for Deep Learning inference. Tutorials for LLM, GPT scenarios. ☆106 · Updated last year
- ☆125 · Updated last year
- The Triton backend for TensorFlow. ☆56 · Updated 2 months ago
- Model compression for ONNX ☆99 · Updated last year
- This repository contains tutorials and examples for Triton Inference Server ☆815 · Updated this week
- OpenAI compatible API for TensorRT LLM triton backend ☆220 · Updated last year
- Transformer related optimization, including BERT, GPT ☆17 · Updated 2 years ago
- ☆70 · Updated 2 years ago
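Following up on the Python backend entry above: that backend expects a model.py defining a TritonPythonModel class whose execute method maps a batch of requests to responses. Below is a minimal sketch, assuming hypothetical tensor names INPUT0/OUTPUT0 and a toy scale-to-[0, 1] pre-processing step; neither is prescribed by the backend itself.

```python
# Minimal sketch of a Triton Python backend model.py.
# Tensor names (INPUT0/OUTPUT0) and the normalization step are illustrative assumptions.
import numpy as np
import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            # Read the input tensor and apply a toy pre-processing step (scale to [0, 1]).
            data = pb_utils.get_input_tensor_by_name(request, "INPUT0").as_numpy()
            scaled = data.astype(np.float32) / 255.0
            out_tensor = pb_utils.Tensor("OUTPUT0", scaled)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out_tensor]))
        return responses
```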