YH-Wu / Triton-Inference-Server-on-Kubernetes
☆30 · Updated 2 years ago
Alternatives and similar repositories for Triton-Inference-Server-on-Kubernetes:
Users interested in Triton-Inference-Server-on-Kubernetes are comparing it to the repositories listed below.
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆132 · Updated 3 weeks ago
- The Triton backend for TensorRT. ☆70 · Updated 3 weeks ago
- The Triton backend for the ONNX Runtime. ☆139 · Updated this week
- Common source, scripts and utilities shared across all Triton repositories. ☆69 · Updated this week
- Deploy a stable diffusion model with ONNX/TensorRT + Triton Server ☆123 · Updated last year
- Model compression for ONNX ☆86 · Updated 3 months ago
- Triton Model Analyzer is a CLI tool that helps users understand the compute and memory requirements of models served by Triton Inference Server. ☆460 · Updated 3 weeks ago
- Triton server ensemble model demo ☆30 · Updated 2 years ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆196 · Updated last month
- Common source, scripts and utilities for creating Triton backends. ☆310 · Updated last month
- Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inference Server. ☆59 · Updated this week
- This repository provides a YOLOv5 GPU optimization sample ☆103 · Updated 2 years ago
- Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch), including a converter from PyTorch -> ONNX… ☆32 · Updated 3 years ago
- The Triton backend for PyTorch TorchScript models. ☆144 · Updated this week
- Sample app code for deploying TAO Toolkit trained models to Triton ☆86 · Updated 6 months ago
- DeepStream Libraries offer CVCUDA, NvImageCodec, and PyNvVideoCodec modules as Python APIs for seamless integration into custom frameworks. ☆40 · Updated 4 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆16 · Updated 9 months ago
- nvjpeg for Python ☆98 · Updated 2 years ago
- The Triton backend for TensorFlow. ☆51 · Updated last month
- This repository provides an optical character detection and recognition solution optimized on NVIDIA devices. ☆71 · Updated 3 weeks ago
- OpenVINO backend for Triton. ☆31 · Updated this week
- This repository serves as an example of deploying YOLO models on Triton Server for performance and testing purposes ☆56 · Updated 9 months ago
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server ☆281 · Updated 2 years ago
- MagFace on Triton Inference Server using TensorRT ☆16 · Updated 3 years ago
- NVIDIA TensorRT Hackathon 2023 final-round topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆41 · Updated last year
- Set up CI in DL / CUDA / cuDNN / TensorRT / onnx2trt / onnxruntime / onnxsim / PyTorch / Triton-Inference-Server / Bazel / Tesseract / PaddleOCR / NV… ☆43 · Updated last year
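Most of the repositories above are Triton backends, clients, or deployment examples that share the same basic workflow: load a model into a running Triton Inference Server and query it over HTTP or gRPC. The snippet below is a minimal sketch of that workflow using the official Python HTTP client (`tritonclient`); the model name and tensor names (`my_model`, `INPUT0`, `OUTPUT0`) are hypothetical placeholders and must match the `config.pbtxt` of whatever model is actually deployed, not values taken from any repository listed here.

```python
# Minimal sketch (not from any repository above): send one inference request
# to a Triton Inference Server via the official Python HTTP client.
# "my_model", "INPUT0", "OUTPUT0", and the shape are placeholder assumptions.
import numpy as np
import tritonclient.http as httpclient

# Connect to the server's HTTP endpoint (default port 8000).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Describe the input tensor and attach data (here: a dummy FP32 image batch).
infer_input = httpclient.InferInput("INPUT0", [1, 3, 224, 224], "FP32")
infer_input.set_data_from_numpy(np.zeros((1, 3, 224, 224), dtype=np.float32))

# Request a named output and run inference.
infer_output = httpclient.InferRequestedOutput("OUTPUT0")
response = client.infer(model_name="my_model",
                        inputs=[infer_input],
                        outputs=[infer_output])
print(response.as_numpy("OUTPUT0").shape)
```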