Common source, scripts and utilities for creating Triton backends.
☆369 · Updated Mar 10, 2026
Alternatives and similar repositories for backend
Users interested in backend are comparing it to the libraries listed below.
- The core library and APIs implementing the Triton Inference Server. ☆170 · Updated Mar 18, 2026
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python; see the model.py sketch after this list. ☆672 · Updated Mar 19, 2026
- Triton Python, C++ and Java client libraries, and gRPC-generated client examples for Go, Java and Scala; client usage is sketched after this list. ☆686 · Updated Mar 10, 2026
- The Triton backend for PyTorch TorchScript models. ☆174 · Updated Mar 16, 2026
- The Triton backend for the ONNX Runtime. ☆172 · Updated Mar 18, 2026
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of Triton Inference Server models. ☆507 · Updated Mar 18, 2026
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,446 · Updated Mar 20, 2026
- The Triton backend for TensorRT. ☆87 · Updated Mar 10, 2026
- Common source, scripts and utilities shared across all Triton repositories. ☆79 · Updated this week
- ☆334 · Updated Mar 17, 2026
- The Triton backend for TensorFlow. ☆55 · Updated Nov 22, 2025
- The Triton TensorRT-LLM Backend. ☆929 · Updated Mar 17, 2026
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying deep learning models with a focus on NVIDIA GPUs. ☆220 · Updated Feb 3, 2026
- ☆413 · Updated Nov 11, 2023
- This repository contains tutorials and examples for the Triton Inference Server. ☆826 · Updated Mar 10, 2026
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆141 · Updated Mar 16, 2026
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments; see the PyTriton sketch after this list. ☆838 · Updated Aug 13, 2025
- Transformer-related optimization, including BERT and GPT. ☆6,400 · Updated Mar 27, 2024
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. ☆13,169 · Updated this week
- OpenVINO backend for Triton. ☆37 · Updated Mar 18, 2026
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT. ☆12,829 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on Hopper, Ada and Blackwell GPUs. ☆3,231 · Updated this week
- ☆22 · Updated Mar 10, 2026
- NVIDIA Inference Xfer Library (NIXL). ☆945 · Updated Mar 20, 2026
- Development repository for the Triton language and compiler. ☆18,708 · Updated this week
- Standardized Distributed Generative and Predictive AI Inference Platform for Scalable, Multi-Framework Deployment on Kubernetes. ☆5,257 · Updated this week
- FlashInfer: Kernel Library for LLM Serving. ☆5,194 · Updated this week
- SGLang Kernel Wheel Index. ☆17 · Updated this week
- A unified library of state-of-the-art model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed. ☆2,218 · Updated this week
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- This repository serves as an example of deploying YOLO models on Triton Server for performance and testing purposes. ☆69 · Updated Oct 20, 2025
- ☆57 · Updated Oct 17, 2023
- Serve, optimize and scale PyTorch models in production. ☆4,360 · Updated Aug 6, 2025
- Unofficial Go package for the Triton Inference Server (https://github.com/triton-inference-server/server). ☆50 · Updated Mar 20, 2026
- AI Router. ☆14 · Updated Aug 1, 2024
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,711 · Updated this week
- CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision. ☆2,666 · Updated Jan 22, 2026
- Using TensorRT and Triton Server to build a BERT model service. ☆13 · Updated Jan 10, 2022
- CUDA Templates and Python DSLs for High-Performance Linear Algebra. ☆9,484 · Updated Mar 18, 2026
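
For the Python backend entry above, a minimal sketch of the model.py interface that backend loads. The tensor names INPUT0/OUTPUT0 and the doubling logic are hypothetical; real names must match the model's config.pbtxt:

```python
# model.py -- minimal Triton Python backend sketch.
# INPUT0/OUTPUT0 are hypothetical tensor names; they must match
# the names declared in the model's config.pbtxt.
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        # Triton hands the backend a batch of requests; return one
        # response per request, in the same order.
        responses = []
        for request in requests:
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            out0 = pb_utils.Tensor("OUTPUT0", in0.as_numpy() * 2.0)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out0]))
        return responses
```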
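For the client libraries entry, a sketch of the Python HTTP client, assuming a Triton server on the default HTTP port and a hypothetical model "my_model" with one FP32 input named INPUT0 and one output named OUTPUT0:

```python
import numpy as np
import tritonclient.http as httpclient

# Assumes a running Triton server; model and tensor names are hypothetical.
client = httpclient.InferenceServerClient(url="localhost:8000")

data = np.random.rand(1, 16).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```

The gRPC client in tritonclient.grpc follows the same pattern against port 8001.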
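And for the PyTriton entry, a sketch of its bind-and-serve flow; the model name "Doubler", the tensor names, and the toy inference function are illustrative:

```python
import numpy as np
from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton


@batch
def infer_fn(input_1):
    # Toy inference function: doubles the input batch.
    return {"output_1": input_1 * 2.0}


with Triton() as triton:
    triton.bind(
        model_name="Doubler",  # hypothetical model name
        infer_func=infer_fn,
        inputs=[Tensor(name="input_1", dtype=np.float32, shape=(-1,))],
        outputs=[Tensor(name="output_1", dtype=np.float32, shape=(-1,))],
        config=ModelConfig(max_batch_size=8),
    )
    triton.serve()  # blocks, exposing Triton's HTTP/gRPC endpoints
```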