YH-Wu / Triton-Inference-Server-on-Kubernetes
☆33 · Updated 3 years ago
Alternatives and similar repositories for Triton-Inference-Server-on-Kubernetes
Users interested in Triton-Inference-Server-on-Kubernetes are comparing it to the libraries listed below.
- The Triton backend for TensorRT. ☆82 · Updated last week
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆139 · Updated 3 weeks ago
- The Triton backend for the ONNX Runtime. ☆170 · Updated last week
- Common source, scripts and utilities shared across all Triton repositories. ☆79 · Updated last month
- Simple example of FastAPI + gRPC AsyncIO + Triton. ☆69 · Updated 3 years ago
- Common source, scripts and utilities for creating Triton backends. ☆365 · Updated 3 weeks ago
- Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of Triton Inference Server models. ☆502 · Updated 2 weeks ago
- How to deploy open source models using DeepStream and Triton Inference Server. ☆86 · Updated last year
- ☆107 · Updated 2 months ago
- This repository provides a YOLOv5 GPU optimization sample. ☆106 · Updated 3 years ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆214 · Updated 8 months ago
- This repository provides an optical character detection and recognition solution optimized for NVIDIA devices. ☆87 · Updated 8 months ago
- Sample app code for deploying TAO Toolkit trained models to Triton. ☆90 · Updated last year
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python (a minimal model.py sketch appears after this list). ☆664 · Updated this week
- A project demonstrating how to use nvmetamux to run multiple models in parallel. ☆113 · Updated last year
- Converting weights of PyTorch models to ONNX & TensorRT engines (see the export sketch after this list). ☆50 · Updated 2 years ago
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server. ☆286 · Updated 3 years ago
- This repository serves as an example of deploying the YOLO models on Triton Server for performance and testing purposes. ☆69 · Updated 2 months ago
- ☆131 · Updated 3 weeks ago
- Deploy a Stable Diffusion model with ONNX/TensorRT + Triton Server. ☆126 · Updated 2 years ago
- A project demonstrating how to make DeepStream Docker images. ☆92 · Updated 3 months ago
- DeepStream Libraries offer CVCUDA, NvImageCodec, and PyNvVideoCodec modules as Python APIs for seamless integration into custom frameworks. ☆70 · Updated 3 months ago
- Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inference Server. ☆73 · Updated last month
- A DeepStream sample application demonstrating end-to-end retail video analytics for brick-and-mortar retail. ☆52 · Updated 3 years ago
- ☆322 · Updated this week
- Triton Python, C++ and Java client libraries, and gRPC-generated client examples for Go, Java and Scala (a short client sketch follows this list). ☆673 · Updated last month
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments (a minimal sketch follows this list). ☆833 · Updated 5 months ago
- ☆36 · Updated last year
- MagFace Triton Inference Server using TensorRT. ☆18 · Updated 3 years ago
- NVIDIA DeepStream SDK 8.0 / 7.1 / 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 application for YOLO-Face models. ☆75 · Updated 3 months ago
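The Python backend entry above works by loading a model.py that defines a TritonPythonModel class. Below is a minimal sketch of such a file; the tensor names INPUT0/OUTPUT0 and the doubling logic are illustrative assumptions, not taken from any of the repositories listed.

```python
# model.py for the Triton Python backend: a minimal sketch, assuming a
# config.pbtxt with backend: "python" that declares one FP32 input "INPUT0"
# and one FP32 output "OUTPUT0" (names are placeholders).
import numpy as np
import triton_python_backend_utils as pb_utils  # provided by the Python backend


class TritonPythonModel:
    def initialize(self, args):
        # args is a dict with fields such as "model_name" and "model_config".
        self.model_name = args["model_name"]

    def execute(self, requests):
        responses = []
        for request in requests:
            # Read the input tensor and apply a trivial stand-in transformation.
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0").as_numpy()
            out0 = pb_utils.Tensor("OUTPUT0", (in0 * 2.0).astype(np.float32))
            responses.append(pb_utils.InferenceResponse(output_tensors=[out0]))
        return responses

    def finalize(self):
        pass
```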
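The PyTorch-to-ONNX/TensorRT conversion entry follows a common two-step flow. Here is a rough sketch of the first step using a stock torchvision model as a stand-in; the model, input shape, and file names are assumptions, not details from that repository.

```python
# Sketch of the PyTorch -> ONNX step; ResNet-18 and the 1x3x224x224 input
# are placeholders for whatever model is actually being converted.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=17,
)

# The ONNX file can then be built into a TensorRT engine, for example with
#   trtexec --onnx=resnet18.onnx --saveEngine=resnet18.plan
```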
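The Triton client libraries entry can be exercised from Python roughly as follows. The server URL, model name, and tensor names are placeholders for whatever the deployed model actually exposes.

```python
# Minimal sketch of calling a deployed model with the Triton HTTP client;
# "resnet18", "input" and "output" are assumed names, not from the repos above.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

data = np.random.rand(1, 3, 224, 224).astype(np.float32)
inputs = [httpclient.InferInput("input", list(data.shape), "FP32")]
inputs[0].set_data_from_numpy(data)
outputs = [httpclient.InferRequestedOutput("output")]

result = client.infer(model_name="resnet18", inputs=inputs, outputs=outputs)
print(result.as_numpy("output").shape)
```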
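The PyTriton entry binds a plain Python callable to a Triton endpoint. A minimal sketch follows, with a placeholder model name and a trivial inference function standing in for a real framework call.

```python
# Minimal PyTriton sketch; "example_model" and the INPUT0/OUTPUT0 names are
# placeholders, and the doubling function stands in for real inference code.
import numpy as np
from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton


@batch
def infer_fn(INPUT0):
    # Receives a batched numpy array and returns a dict of output batches.
    return {"OUTPUT0": (INPUT0 * 2.0).astype(np.float32)}


with Triton() as triton:
    triton.bind(
        model_name="example_model",
        infer_func=infer_fn,
        inputs=[Tensor(name="INPUT0", dtype=np.float32, shape=(-1,))],
        outputs=[Tensor(name="OUTPUT0", dtype=np.float32, shape=(-1,))],
        config=ModelConfig(max_batch_size=8),
    )
    triton.serve()  # blocks and serves HTTP/gRPC endpoints
```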