Triton Python, C++ and Java client libraries, and gRPC-generated client examples for Go, Java and Scala.
☆686 · Mar 10, 2026 · Updated last week
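For reference, here is a minimal sketch (not taken from this repository's examples) of calling a Triton server with the Python HTTP client. The model name "densenet_onnx", the tensor names, and the input shape are illustrative assumptions.

```python
# Minimal Triton Python HTTP client sketch; model/tensor names are assumptions.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request: one FP32 input tensor and one requested output.
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
inputs = [httpclient.InferInput("data_0", list(input_data.shape), "FP32")]
inputs[0].set_data_from_numpy(input_data)
outputs = [httpclient.InferRequestedOutput("fc6_1")]

# Run inference and read the result back as a numpy array.
response = client.infer(model_name="densenet_onnx", inputs=inputs, outputs=outputs)
print(response.as_numpy("fc6_1").shape)
```

The gRPC client (`tritonclient.grpc`) follows the same pattern with a different endpoint and port.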
Alternatives and similar repositories for client
Users who are interested in client are comparing it to the libraries listed below.
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,446 · Updated this week
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python (see the Python backend sketch after this list). ☆672 · Updated this week
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of Triton Inference Server models. ☆507 · Updated this week
- Common source, scripts and utilities for creating Triton backends. ☆369 · Mar 10, 2026 · Updated last week
- The Triton backend for TensorRT. ☆86 · Mar 10, 2026 · Updated last week
- The Triton backend for the ONNX Runtime. ☆172 · Updated this week
- The core library and APIs implementing the Triton Inference Server. ☆170 · Updated this week
- This repository contains tutorials and examples for Triton Inference Server. ☆823 · Mar 10, 2026 · Updated last week
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆220 · Feb 3, 2026 · Updated last month
- ☆334 · Mar 17, 2026 · Updated last week
- The Triton TensorRT-LLM Backend. ☆926 · Updated this week
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments (see the PyTriton sketch after this list). ☆838 · Aug 13, 2025 · Updated 7 months ago
- ☆413 · Nov 11, 2023 · Updated 2 years ago
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆141 · Mar 16, 2026 · Updated last week
- Common source, scripts and utilities shared across all Triton repositories. ☆79 · Updated this week
- ☆135 · Mar 13, 2026 · Updated last week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT. ☆12,800 · Mar 9, 2026 · Updated 2 weeks ago
- The Triton backend for PyTorch TorchScript models. ☆174 · Mar 16, 2026 · Updated last week
- Standardized Distributed Generative and Predictive AI Inference Platform for Scalable, Multi-Framework Deployment on Kubernetes. ☆5,216 · Updated this week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. ☆13,169 · Updated this week
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server. ☆283 · Jun 2, 2022 · Updated 3 years ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT. ☆2,959 · Updated this week
- ONNX-TensorRT: TensorRT backend for ONNX. ☆3,197 · Feb 3, 2026 · Updated last month
- A client library in Rust for Nvidia Triton. ☆31 · Aug 3, 2023 · Updated 2 years ago
- The Triton backend for TensorFlow. ☆55 · Nov 22, 2025 · Updated 4 months ago
- OpenVINO backend for Triton. ☆37 · Updated this week
- Transformer-related optimization, including BERT, GPT. ☆6,400 · Mar 27, 2024 · Updated last year
- Serve, optimize and scale PyTorch models in production. ☆4,360 · Aug 6, 2025 · Updated 7 months ago
- FIL backend for the Triton Inference Server. ☆89 · Mar 10, 2026 · Updated 2 weeks ago
- FastAPI middleware for comparing different ML model serving approaches. ☆15 · Jul 5, 2023 · Updated 2 years ago
- Deploy a Stable Diffusion model with ONNX/TensorRT + Triton server. ☆126 · Aug 15, 2023 · Updated 2 years ago
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator. ☆19,568 · Mar 17, 2026 · Updated last week
- An easy-to-use PyTorch to TensorRT converter. ☆4,858 · Aug 17, 2024 · Updated last year
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization tools. ☆3,332 · Mar 13, 2026 · Updated last week
- Sample app code for deploying TAO Toolkit trained models to Triton. ☆90 · Aug 29, 2024 · Updated last year
- C++ application to perform computer vision tasks using Nvidia Triton Server for model inference. ☆29 · Updated this week
- Triton backend for managing the model state tensors automatically in the sequence batcher. ☆16 · Feb 12, 2024 · Updated 2 years ago
- ☆22 · Mar 10, 2026 · Updated last week
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,687 · Oct 23, 2024 · Updated last year
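The Python backend entry above mentions implementing pre- and post-processing logic in Python. Below is a minimal sketch assuming the backend's documented `model.py` interface (a `TritonPythonModel` class with an `execute()` method); the tensor names and the doubling step are illustrative assumptions, and `triton_python_backend_utils` is only importable inside the backend's runtime.

```python
# Sketch of a Triton Python backend model.py; names "INPUT0"/"OUTPUT0" are assumptions.
import numpy as np
import triton_python_backend_utils as pb_utils  # provided by the Python backend runtime


class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            # Read the input tensor and run a trivial post-processing step (scale by 2).
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0").as_numpy()
            out0 = pb_utils.Tensor("OUTPUT0", (in0 * 2.0).astype(np.float32))
            responses.append(pb_utils.InferenceResponse(output_tensors=[out0]))
        return responses
```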
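The PyTriton entry above describes a Flask/FastAPI-like interface for serving Python callables through Triton. The following is a minimal sketch assuming PyTriton's bind/serve flow; the model name "identity", the tensor specs, and the echo inference function are illustrative assumptions.

```python
# PyTriton sketch: bind a Python callable as a Triton model and serve it.
import numpy as np
from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton


@batch
def infer_fn(INPUT0):
    # Trivial inference callable: echo the batched input back as OUTPUT0.
    return {"OUTPUT0": INPUT0}


with Triton() as triton:
    triton.bind(
        model_name="identity",
        infer_func=infer_fn,
        inputs=[Tensor(name="INPUT0", dtype=np.float32, shape=(-1,))],
        outputs=[Tensor(name="OUTPUT0", dtype=np.float32, shape=(-1,))],
        config=ModelConfig(max_batch_size=8),
    )
    # Blocks and exposes the usual Triton HTTP/gRPC endpoints.
    triton.serve()
```

Once serving, the model can be queried with the client libraries from this repository, exactly as in the client sketch near the top of the page.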