YH-Wu / Triton-Inference-Server-on-Kubernetes
☆33 · Updated 3 years ago
Alternatives and similar repositories for Triton-Inference-Server-on-Kubernetes
Users that are interested in Triton-Inference-Server-on-Kubernetes are comparing it to the libraries listed below
- The Triton backend for TensorRT. ☆79 · Updated last week
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API (see the DALI pipeline sketch after this list). ☆138 · Updated last month
- The Triton backend for the ONNX Runtime. ☆162 · Updated last week
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Server models. ☆494 · Updated last week
- Common source, scripts and utilities for creating Triton backends. ☆351 · Updated last week
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆212 · Updated 5 months ago
- Simple example of FastAPI + gRPC AsyncIO + Triton. ☆67 · Updated 3 years ago
- Triton Python, C++ and Java client libraries, and gRPC-generated client examples for Go, Java and Scala (see the Python client sketch after this list). ☆649 · Updated last week
- Simple example of FastAPI + Celery + Triton for benchmarking. ☆64 · Updated 3 years ago
- ☆102 · Updated last year
- Common source, scripts and utilities shared across all Triton repositories. ☆76 · Updated last week
- ☆302 · Updated this week
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python (see the model.py sketch after this list). ☆645 · Updated this week
- Converting weights of PyTorch models to ONNX & TensorRT engines (see the export sketch after this list). ☆50 · Updated 2 years ago
- This repository provides a YOLOv5 GPU optimization sample. ☆106 · Updated 2 years ago
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server. ☆287 · Updated 3 years ago
- Deploy a Stable Diffusion model with ONNX/TensorRT + tritonserver. ☆127 · Updated 2 years ago
- Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inference Server. ☆70 · Updated last week
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments (see the PyTriton sketch after this list). ☆823 · Updated 2 months ago
- This repository provides an optical character detection and recognition solution optimized for NVIDIA devices. ☆81 · Updated 5 months ago
- Sample app code for deploying TAO Toolkit trained models to Triton. ☆89 · Updated last year
- Model compression for ONNX. ☆97 · Updated 11 months ago
- Set up CI in DL / CUDA / cuDNN / TensorRT / onnx2trt / onnxruntime / onnxsim / PyTorch / Triton-Inference-Server / Bazel / Tesseract / PaddleOCR / NV… ☆43 · Updated 2 years ago
- How to deploy open source models using DeepStream and Triton Inference Server. ☆85 · Updated last year
- ☆36 · Updated last year
- The Triton backend for TensorFlow. ☆53 · Updated 4 months ago
- ☆21 · Updated last week
- DeepStream Libraries offer CVCUDA, NvImageCodec, and PyNvVideoCodec modules as Python APIs for seamless integration into custom frameworks. ☆70 · Updated last month
- This repository serves as an example of deploying the YOLO models on Triton Server for performance and testing purposes. ☆69 · Updated last year
- TAO Toolkit deep learning networks with PyTorch backend. ☆104 · Updated 3 weeks ago
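
The DALI backend entry above serves preprocessing pipelines that are written in Python and serialized into the Triton model repository. Below is a minimal sketch of such a pipeline; the input name `DALI_INPUT_0`, the image size, the normalization constants, and the repository path are assumptions that must match your own `config.pbtxt` and layout, not values taken from the listed repository.

```python
# Minimal sketch of a DALI preprocessing pipeline serialized for the Triton DALI
# backend. Input/output names, sizes, and the target path are illustrative and
# must match the model's config.pbtxt and model repository layout.
import nvidia.dali.fn as fn
import nvidia.dali.types as types
from nvidia.dali import pipeline_def


@pipeline_def(batch_size=32, num_threads=4, device_id=0)
def preprocessing_pipeline():
    # Encoded image bytes arrive from Triton through an external source.
    raw = fn.external_source(device="cpu", name="DALI_INPUT_0", dtype=types.UINT8)
    images = fn.decoders.image(raw, device="mixed", output_type=types.RGB)
    images = fn.resize(images, resize_x=224, resize_y=224)
    return fn.crop_mirror_normalize(
        images,
        dtype=types.FLOAT,
        output_layout="CHW",
        mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
        std=[0.229 * 255, 0.224 * 255, 0.225 * 255],
    )


if __name__ == "__main__":
    pipe = preprocessing_pipeline()
    # The DALI backend loads this serialized pipeline from the model repository.
    pipe.serialize(filename="model_repository/dali_preprocess/1/model.dali")
```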
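
The client-libraries entry above provides the `tritonclient` Python package; a single HTTP inference request looks roughly like the sketch below. The model name `my_model` and the tensor names and shapes are placeholders, not values from any of the listed repositories.

```python
# Minimal sketch of a Triton HTTP client request using tritonclient.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build one FP32 input tensor of shape [1, 3, 224, 224] (placeholder shape).
data = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("INPUT__0", list(data.shape), "FP32")
infer_input.set_data_from_numpy(data)

requested_output = httpclient.InferRequestedOutput("OUTPUT__0")

response = client.infer(
    model_name="my_model",
    inputs=[infer_input],
    outputs=[requested_output],
)
print(response.as_numpy("OUTPUT__0").shape)
```

The gRPC client in `tritonclient.grpc` follows the same pattern, pointed at the server's gRPC port (8001 by default).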
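
The Python backend entry above loads a `model.py` that implements a `TritonPythonModel` class; the sketch below shows the general shape of such a file. It only runs inside the backend (the `triton_python_backend_utils` module is provided by Triton itself), and the tensor names and the softmax post-processing step are illustrative assumptions.

```python
# Minimal sketch of a Triton Python-backend model.py. The tensor names
# ("INPUT0"/"OUTPUT0") are placeholders and must match config.pbtxt.
import numpy as np
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # args carries the model name, config, instance kind, etc.
        self.model_name = args["model_name"]

    def execute(self, requests):
        responses = []
        for request in requests:
            in_tensor = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            data = in_tensor.as_numpy()
            # Example post-processing step: softmax over the last axis.
            exp = np.exp(data - data.max(axis=-1, keepdims=True))
            probs = exp / exp.sum(axis=-1, keepdims=True)
            out_tensor = pb_utils.Tensor("OUTPUT0", probs.astype(np.float32))
            responses.append(pb_utils.InferenceResponse(output_tensors=[out_tensor]))
        return responses
```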
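
The PyTorch-to-ONNX/TensorRT entry above follows the usual two-step route: export the network to ONNX first, then build an engine from that file. A minimal export sketch, assuming a torchvision ResNet-50 stands in for the real network:

```python
# Minimal sketch of exporting a PyTorch model to ONNX as the first step toward
# a TensorRT engine. The torchvision model and shapes are stand-ins; the
# resulting model.onnx can then be built into an engine (e.g. with trtexec).
import torch
import torchvision

model = torchvision.models.resnet50(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=17,
)
```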
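
For the PyTriton entry above, binding a plain Python callable to a Triton endpoint is roughly as follows. This is a sketch under assumed names (`Doubler`, `input_0`, `output_0`) and a toy inference function, not code from the PyTriton repository.

```python
# Minimal sketch of serving a Python callable with PyTriton. Model name,
# tensor specs, and the toy inference function are illustrative assumptions.
import numpy as np
from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton


@batch
def infer_fn(input_0):
    # Toy "model": scale the batched input; replace with a real forward pass.
    return {"output_0": (input_0 * 2.0).astype(np.float32)}


with Triton() as triton:
    triton.bind(
        model_name="Doubler",
        infer_func=infer_fn,
        inputs=[Tensor(name="input_0", dtype=np.float32, shape=(-1,))],
        outputs=[Tensor(name="output_0", dtype=np.float32, shape=(-1,))],
        config=ModelConfig(max_batch_size=32),
    )
    triton.serve()
```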