YH-Wu / Triton-Inference-Server-on-Kubernetes
☆30 · Updated 2 years ago
Alternatives and similar repositories for Triton-Inference-Server-on-Kubernetes:
Users interested in Triton-Inference-Server-on-Kubernetes are comparing it to the libraries listed below; a minimal Triton client sketch follows the list.
- The Triton backend for TensorRT. ☆68 · Updated this week
- The Triton backend for the ONNX Runtime. ☆136 · Updated this week
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆131 · Updated this week
- Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inference Server. ☆52 · Updated this week
- Common source, scripts and utilities shared across all Triton repositories. ☆65 · Updated this week
- This repository provides an optical character detection and recognition solution optimized for NVIDIA devices. ☆63 · Updated last week
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆193 · Updated this week
- Sample app code for deploying TAO Toolkit trained models to Triton ☆85 · Updated 4 months ago
- Model compression for ONNX ☆80 · Updated 2 months ago
- A faster implementation of OpenCV-CUDA that uses OpenCV objects, and more! ☆47 · Updated this week
- Demonstration of the use of TensorRT and Triton ☆16 · Updated 3 years ago
- This repository provides a YOLOv5 GPU optimization sample ☆101 · Updated 2 years ago
- The Triton backend for PyTorch TorchScript models. ☆139 · Updated this week
- Common source, scripts and utilities for creating Triton backends (see the Python-backend sketch after this list). ☆305 · Updated this week
- A tool to convert a TensorRT engine/plan to a fake ONNX model ☆37 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆15 · Updated 7 months ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Server models. ☆446 · Updated this week
- Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch), including a converter from PyTorch -> ONNX ☆32 · Updated 3 years ago
- Plugin for deploying MLflow models to TorchServe ☆107 · Updated last year
- The Triton backend for TensorFlow. ☆45 · Updated this week
- ONNX Python Examples (see the onnxruntime sketch after this list) ☆16 · Updated 2 years ago
- Dynamic batching library for Deep Learning inference, with tutorials for LLM and GPT scenarios (a generic batching sketch follows the list). ☆88 · Updated 5 months ago
- Triton server ensemble model demo ☆30 · Updated 2 years ago
- Set up CI for DL: CUDA / cuDNN / TensorRT / onnx2trt / onnxruntime / onnxsim / PyTorch / Triton-Inference-Server / Bazel / Tesseract / PaddleOCR / NV… ☆43 · Updated last year
- This repository serves as an example of deploying YOLO models on Triton Server for performance and testing purposes ☆49 · Updated 7 months ago
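
Since the repository this list is built around is about serving models with Triton Inference Server on Kubernetes, here is a minimal, hedged sketch of querying such a deployment with NVIDIA's `tritonclient` package. The Service hostname, model name, and tensor names below are illustrative assumptions, not taken from the repository.

```python
# Minimal sketch: call a Triton server exposed via a Kubernetes Service.
# Requires: pip install "tritonclient[http]" numpy
import numpy as np
import tritonclient.http as httpclient

# Assumed in-cluster DNS name of the Triton Service; adjust to your deployment.
client = httpclient.InferenceServerClient(url="triton-svc.default.svc.cluster.local:8000")

# Hypothetical model with one FP32 input "INPUT__0" and one output "OUTPUT__0".
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("INPUT__0", list(batch.shape), "FP32")
inp.set_data_from_numpy(batch)
out = httpclient.InferRequestedOutput("OUTPUT__0")

result = client.infer(model_name="resnet50", inputs=[inp], outputs=[out])
print(result.as_numpy("OUTPUT__0").shape)
```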
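
The Triton backend utilities entry above targets custom C++ backends; as a rough companion illustration, this is a minimal `model.py` for Triton's built-in Python backend, which is usually the quickest way to prototype backend logic. The tensor names are assumptions (they would be declared in the model's `config.pbtxt`), and `triton_python_backend_utils` is only importable inside a Triton server process.

```python
# Minimal Triton Python-backend model (model.py): echoes its input doubled.
# Runs only inside Triton, which provides triton_python_backend_utils.
import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            # Assumed tensor names "INPUT0"/"OUTPUT0" from config.pbtxt.
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            out0 = pb_utils.Tensor("OUTPUT0", in0.as_numpy() * 2.0)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out0]))
        return responses
```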
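
The dynamic batching entry above is about transparently grouping single requests into batches before a forward pass. This generic sketch shows only the core idea (gather requests briefly, run one batched pass, scatter results back) and is not that library's API.

```python
# Generic dynamic-batching sketch: collect requests for up to max_wait seconds
# or until max_batch is reached, run one batched forward pass, fan results out.
import queue
import threading
import numpy as np

class DynamicBatcher:
    def __init__(self, model_fn, max_batch=8, max_wait=0.01):
        self.model_fn, self.max_batch, self.max_wait = model_fn, max_batch, max_wait
        self.q = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def infer(self, x):
        done, slot = threading.Event(), {}
        self.q.put((x, done, slot))
        done.wait()                      # block until the batch containing x ran
        return slot["y"]

    def _loop(self):
        while True:
            items = [self.q.get()]       # block for the first request
            try:
                while len(items) < self.max_batch:
                    items.append(self.q.get(timeout=self.max_wait))
            except queue.Empty:
                pass                     # waited long enough; run what we have
            batch = np.stack([x for x, _, _ in items])
            outputs = self.model_fn(batch)   # one forward pass for the whole batch
            for (_, done, slot), y in zip(items, outputs):
                slot["y"] = y
                done.set()
```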
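
Several entries center on ONNX (the ONNX Python Examples and the ONNX model-compression tool); as a baseline for those, this is a minimal `onnxruntime` inference sketch. The model path is a placeholder, and the input is assumed to be float32 with dynamic dimensions treated as 1.

```python
# Minimal onnxruntime sketch; "model.onnx" is a placeholder path.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
meta = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in meta.shape]  # fill dynamic dims
x = np.random.rand(*shape).astype(np.float32)                 # assumes FP32 input
outputs = sess.run(None, {meta.name: x})                      # None = all outputs
print([o.shape for o in outputs])
```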