The core library and APIs implementing the Triton Inference Server.
☆170 · Updated Mar 18, 2026
Alternatives and similar repositories for core
Users interested in core are comparing it to the libraries listed below.
- Common source, scripts, and utilities for creating Triton backends. ☆369 · Updated Mar 10, 2026
- The Triton backend for the ONNX Runtime. ☆172 · Updated this week
- Common source, scripts, and utilities shared across all Triton repositories. ☆79 · Updated this week
- Triton Python, C++, and Java client libraries, plus gRPC-generated client examples for Go, Java, and Scala. ☆686 · Updated Mar 10, 2026
- Triton backend for automatically managing model state tensors in the sequence batcher. ☆16 · Updated Feb 12, 2024
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,446 · Updated this week
- Triton Model Analyzer is a CLI tool that helps users understand the compute and memory requirements of the Triton Inference Server… ☆507 · Updated this week
- The Triton backend for TensorFlow. ☆55 · Updated Nov 22, 2025
- Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python. ☆672 · Updated this week
- The Triton TensorRT-LLM backend. ☆926 · Updated this week
- ☆334 · Updated Mar 17, 2026
- Triton Model Navigator is an inference toolkit for optimizing and deploying deep learning models, with a focus on NVIDIA GPUs. ☆220 · Updated Feb 3, 2026
- The Triton backend for PyTorch TorchScript models. ☆174 · Updated Mar 16, 2026
- p2p tool. ☆18 · Updated Feb 3, 2015
- Tutorials and examples for the Triton Inference Server. ☆823 · Updated Mar 10, 2026
- Rust crate for some audio utilities. ☆27 · Updated Mar 8, 2025
- TRITONCACHE implementation of a Redis cache. ☆17 · Updated Mar 13, 2026
- The Triton backend for running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆141 · Updated Mar 16, 2026
- ☆27 · Updated Sep 1, 2023
- Triton Inference Server web UI. ☆20 · Updated Nov 6, 2023
- ☆24 · Updated Jun 8, 2025
- HierarchicalKV is part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. ☆197 · Updated Feb 27, 2026
- Triton CLI is an open source command-line interface that enables users to create, deploy, and profile models served by the Triton Inference Server. ☆74 · Updated Mar 10, 2026
- YOLOv8-series models with support for the latest TensorRT 10. ☆15 · Updated Jul 24, 2024
- YOLOv10-series models with support for the latest TensorRT 10. ☆16 · Updated Jul 24, 2024
- Nsight Compute in Docker. ☆13 · Updated Dec 21, 2023
- ☆22 · Updated Mar 10, 2026
- ☆36 · Updated Feb 9, 2024
- CUDA Core Compute Libraries. ☆2,217 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs). ☆785 · Updated Apr 6, 2025
- FastAPI middleware for comparing different ML model serving approaches. ☆15 · Updated Jul 5, 2023
- 🚀 Collection of libraries used with fms-hf-tuning to accelerate fine-tuning and training of large models. ☆13 · Updated Jan 30, 2026
- ☆57 · Updated Oct 17, 2023
- Transformer-related optimization, including BERT and GPT. ☆6,400 · Updated Mar 27, 2024
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections. ☆125 · Updated Jun 23, 2022
- NVIDIA Inference Xfer Library (NIXL). ☆945 · Updated this week
- Some TensorFlow examples. ☆18 · Updated Apr 3, 2018
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components… ☆12,800 · Updated Mar 9, 2026
- ☆135 · Updated Mar 13, 2026
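Many of the repositories above (the Python backend, the framework backends, Model Analyzer, the client libraries) revolve around a Triton model repository: a directory tree where each model has a `config.pbtxt` and one or more numbered version directories. As a rough sketch, assuming a hypothetical model named `add_sub` served by the Python backend (the model name, tensor names, and shapes here are illustrative, not from any of the listed repos), the layout and a minimal configuration could look like:

```
model_repository/
└── add_sub/
    ├── config.pbtxt
    └── 1/
        └── model.py
```

```
# config.pbtxt — minimal model configuration for the Python backend.
# Tensor names, dims, and max_batch_size are placeholder values.
name: "add_sub"
backend: "python"
max_batch_size: 8
input [
  {
    name: "INPUT0"
    data_type: TYPE_FP32
    dims: [ 4 ]
  }
]
output [
  {
    name: "OUTPUT0"
    data_type: TYPE_FP32
    dims: [ 4 ]
  }
]
```

Swapping `backend: "python"` for `"onnxruntime"`, `"tensorflow"`, `"pytorch"`, or `"dali"` (and replacing `model.py` with the corresponding model artifact) is how the backend repositories listed above plug into the same server.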