triton-inference-server / core
The core library and APIs implementing the Triton Inference Server.
☆160 · Updated 3 weeks ago
Alternatives and similar repositories for core
Users interested in core are comparing it to the libraries listed below.
- Common source, scripts and utilities for creating Triton backends. ☆365 · Updated 3 weeks ago
- Common source, scripts and utilities shared across all Triton repositories. ☆79 · Updated last month
- The Triton backend for the ONNX Runtime. ☆170 · Updated this week
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of Triton Inference Server models. ☆502 · Updated last week
- Triton Python, C++ and Java client libraries, and GRPC-generated client examples for Go, Java and Scala (a Python client sketch follows this list). ☆673 · Updated 3 weeks ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆214 · Updated 8 months ago
- HierarchicalKV is a part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. ☆187 · Updated 2 months ago
- NVIDIA Inference Xfer Library (NIXL) ☆788 · Updated last week
- The Triton backend for PyTorch TorchScript models. ☆170 · Updated this week
- KV cache store for distributed LLM inference ☆384 · Updated last month
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python (a Python-backend sketch follows this list). ☆663 · Updated 3 weeks ago
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆368 · Updated this week
- The Triton backend for TensorRT. ☆82 · Updated this week
- Universal cross-platform tokenizers binding to HF and sentencepiece ☆440 · Updated 5 months ago
- A tensor-aware point-to-point communication primitive for machine learning ☆282 · Updated 3 weeks ago
- The Triton backend for TensorFlow. ☆56 · Updated last month
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and versatility of software and hardware. ☆289 · Updated 4 months ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆449 · Updated this week
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆99 · Updated 2 years ago
- Dynolog is a telemetry daemon for performance monitoring and tracing. It exports metrics from different components in the system. ☆359 · Updated last week
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud. ☆122 · Updated 2 years ago
- A model compilation solution for various hardware ☆458 · Updated 4 months ago
- A high-performance framework for training wide-and-deep recommender systems on heterogeneous clusters ☆159 · Updated last year
- The NVIDIA® Tools Extension SDK (NVTX) is a C-based Application Programming Interface (API) for annotating events, code ranges, and resources in your applications (an annotation sketch follows this list). ☆499 · Updated this week
- The Triton TensorRT-LLM Backend ☆912 · Updated this week
- Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inference Server. ☆73 · Updated last month
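
For the Triton client libraries listed above, here is a minimal sketch of a Python HTTP client. The server address, the model name `my_model`, and the tensor names `INPUT0`/`OUTPUT0` are illustrative assumptions, not names any real deployment requires.

```python
# Minimal sketch: query a Triton server over HTTP with the Python client.
# Assumes a server on localhost:8000 serving a hypothetical model "my_model"
# with one FP32 input "INPUT0" of shape [1, 16] and one output "OUTPUT0".
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request tensor and attach the data.
data = np.random.rand(1, 16).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

# Name the output we want back, run inference, and read the result.
out = httpclient.InferRequestedOutput("OUTPUT0")
result = client.infer(model_name="my_model", inputs=[inp], outputs=[out])
print(result.as_numpy("OUTPUT0"))
```

The GRPC client in the same package follows the same pattern via `tritonclient.grpc`, which by default talks to port 8001.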
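
For the Python backend listed above, here is a minimal sketch of the `model.py` a model version directory would contain. The tensor names and the doubling "logic" are illustrative assumptions; the fixed contract is a class named `TritonPythonModel` with an `execute` method.

```python
# Minimal sketch of a Triton Python-backend model
# (lives at models/<model_name>/1/model.py next to a config.pbtxt).
# Assumes the config declares an input "INPUT0" and an output "OUTPUT0".
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # Optional; args carries the model config and paths as JSON strings.
        pass

    def execute(self, requests):
        # Triton hands over a batch of requests; return one response each.
        responses = []
        for request in requests:
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            # Illustrative "processing": double the input values.
            out0 = pb_utils.Tensor("OUTPUT0", in0.as_numpy() * 2)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out0]))
        return responses
```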
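
For NVTX listed above, here is a minimal sketch using NVIDIA's Python binding (the `nvtx` package on PyPI); the range names are illustrative assumptions. Ranges annotated this way appear on the timelines of profilers such as Nsight Systems.

```python
# Minimal sketch: mark regions of interest with NVTX ranges so they
# show up on profiler timelines (e.g. Nsight Systems).
import nvtx

# Scoped form: the range covers the body of the with-block.
with nvtx.annotate("preprocess", color="blue"):
    pass  # work to be profiled goes here

# Push/pop form for ranges that do not fit a single lexical block.
nvtx.push_range("inference")
# ... run the model ...
nvtx.pop_range()
```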