YH-Wu / Triton-Inference-Server-on-Kubernetes
☆31 · Updated 2 years ago
Alternatives and similar repositories for Triton-Inference-Server-on-Kubernetes
Users interested in Triton-Inference-Server-on-Kubernetes are comparing it to the libraries listed below. (Two short Python sketches of the recurring client and health-check calls follow the list.)
- The Triton backend for TensorRT. ☆76 · Updated 3 weeks ago
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆133 · Updated 2 weeks ago
- The Triton backend for the ONNX Runtime. ☆148 · Updated 3 weeks ago
- OpenVINO backend for Triton. ☆31 · Updated 3 weeks ago
- Common source, scripts and utilities shared across all Triton repositories. ☆72 · Updated 3 weeks ago
- MagFace Triton Inference Server using TensorRT ☆16 · Updated 3 years ago
- Common source, scripts and utilities for creating Triton backends. ☆324 · Updated 3 weeks ago
- Simple example of FastAPI + Celery + Triton for benchmarking ☆64 · Updated 2 years ago
- Simple example of FastAPI + gRPC AsyncIO + Triton ☆65 · Updated 2 years ago
- ☆35 · Updated last year
- Triton server ensemble model demo ☆30 · Updated 3 years ago
- This repository provides a YOLOv5 GPU optimization sample ☆103 · Updated 2 years ago
- TorchServe + TensorRT + Detection ☆19 · Updated 3 years ago
- Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inference Server. ☆62 · Updated 3 weeks ago
- ☆18 · Updated 3 weeks ago
- Model compression for ONNX ☆96 · Updated 6 months ago
- Set up CI in DL/ cuda/ cudnn/ TensorRT/ onnx2trt/ onnxruntime/ onnxsim/ PyTorch/ Triton-Inference-Server/ Bazel/ Tesseract/ PaddleOCR/ NV… ☆43 · Updated last year
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆202 · Updated last month
- Sample app code for deploying TAO Toolkit trained models to Triton ☆87 · Updated 9 months ago
- The Triton backend for the PyTorch TorchScript models. ☆150 · Updated 3 weeks ago
- DeepStream Libraries offer CVCUDA, NvImageCodec, and PyNvVideoCodec modules as Python APIs for seamless integration into custom frameworks. ☆51 · Updated 7 months ago
- Plugin for deploying MLflow models to TorchServe ☆109 · Updated 2 years ago
- Easy and Efficient Quantization for Transformers ☆198 · Updated 3 months ago
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server ☆285 · Updated 3 years ago
- Converting weights of PyTorch models to ONNX & TensorRT engines ☆49 · Updated 2 years ago
- ☆261 · Updated 3 weeks ago
- The Triton backend for TensorFlow. ☆51 · Updated this week
- A toolkit to help optimize ONNX models ☆153 · Updated this week
- Triton Model Analyzer is a CLI tool that helps users understand the compute and memory requirements of Triton Inference Server models. ☆478 · Updated this week
- How to deploy open source models using DeepStream and Triton Inference Server ☆79 · Updated 11 months ago
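
Several of the examples above (the FastAPI + Celery and FastAPI + gRPC AsyncIO repositories, the client side of the YOLOv4 deployment) reduce to the same core operation: sending a tensor to a running Triton server through the `tritonclient` package. Here is a minimal sketch of that call over gRPC, assuming a server on `localhost:8001` and a hypothetical model named `resnet50` whose `config.pbtxt` declares an input `input__0` and an output `output__0` (all three names are placeholders, not taken from any repository listed here):

```python
import numpy as np
import tritonclient.grpc as grpcclient

# Connect to Triton's gRPC endpoint (port 8001 by default).
client = grpcclient.InferenceServerClient(url="localhost:8001")

# Describe the input tensor; name, shape, and dtype must match
# what the model's config.pbtxt declares (placeholder names here).
infer_input = grpcclient.InferInput("input__0", [1, 3, 224, 224], "FP32")
infer_input.set_data_from_numpy(
    np.random.rand(1, 3, 224, 224).astype(np.float32)
)

# Request a specific output tensor by name.
infer_output = grpcclient.InferRequestedOutput("output__0")

# Run inference and read the result back as a NumPy array.
result = client.infer(
    model_name="resnet50", inputs=[infer_input], outputs=[infer_output]
)
print(result.as_numpy("output__0").shape)
```

The AsyncIO variant listed above presumably builds on `tritonclient.grpc.aio`, whose client exposes the same `infer` call as an awaitable, so a FastAPI endpoint can issue it without blocking the event loop.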
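
Because the parent repository's focus is Kubernetes, the other recurring piece is health checking: Triton serves the KServe v2 health endpoints over HTTP (port 8000 by default), and these are the natural `httpGet` targets for Kubernetes liveness and readiness probes. A small sketch of querying them directly, again assuming a local server and the hypothetical `resnet50` model:

```python
import requests

BASE = "http://localhost:8000"  # Triton's HTTP endpoint (port 8000 by default)

checks = {
    "server live": f"{BASE}/v2/health/live",            # liveness probe target
    "server ready": f"{BASE}/v2/health/ready",          # readiness probe target
    "model ready": f"{BASE}/v2/models/resnet50/ready",  # per-model readiness
}

for name, url in checks.items():
    # Triton answers with HTTP 200 when the check passes.
    status = requests.get(url).status_code
    print(f"{name}: {'ok' if status == 200 else f'failing ({status})'}")
```

In a Deployment manifest the same two server-level paths typically appear under `livenessProbe` and `readinessProbe`, which is what lets Kubernetes hold traffic until the models are actually loaded.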