Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python.
☆673 · Updated Apr 15, 2026
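For orientation, here is a minimal sketch of what a python_backend model looks like, assuming the standard model-repository layout (`models/<name>/1/model.py`) and placeholder tensor names `INPUT0`/`OUTPUT0` that would be declared in a matching `config.pbtxt`:

```python
# model.py -- minimal sketch of a Triton Python backend model.
# Tensor names INPUT0/OUTPUT0 are illustrative placeholders.
import numpy as np
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    """Triton discovers this class by name and calls its methods."""

    def execute(self, requests):
        """Called with a batch of requests; return one response per request."""
        responses = []
        for request in requests:
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            # Example pre/post-processing step: double the input.
            result = in0.as_numpy() * 2
            out0 = pb_utils.Tensor("OUTPUT0", result.astype(np.float32))
            responses.append(pb_utils.InferenceResponse(output_tensors=[out0]))
        return responses
```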
Alternatives and similar repositories for python_backend
Users that are interested in python_backend are comparing it to the libraries listed below.
- Common source, scripts and utilities for creating Triton backends. ☆369 · Updated Apr 13, 2026
- Triton Python, C++ and Java client libraries, and GRPC-generated client examples for go, java and scala. ☆688 · Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,625 · Updated this week
- This repository contains tutorials and examples for Triton Inference Server. ☆830 · Updated Apr 21, 2026
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments (see the sketch after this list). ☆844 · Updated Aug 13, 2025
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Server models. ☆510 · Updated this week
- The Triton backend for the ONNX Runtime. ☆174 · Updated Apr 24, 2026
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆221 · Updated Feb 3, 2026
- The Triton backend for TensorRT. ☆87 · Updated Apr 15, 2026
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's python API. ☆140 · Updated Apr 8, 2026
- ☆341 · Updated this week
- The Triton TensorRT-LLM Backend. ☆933 · Updated Apr 22, 2026
- The core library and APIs implementing the Triton Inference Server. ☆170 · Updated this week
- ☆412 · Updated Nov 11, 2023
- TRITONCACHE implementation of a Redis cache. ☆17 · Updated Apr 15, 2026
- The Triton backend for TensorFlow. ☆56 · Updated Nov 22, 2025
- The Triton backend for the PyTorch TorchScript models. ☆177 · Updated Apr 24, 2026
- OpenVINO backend for Triton. ☆37 · Updated Apr 15, 2026
- Common source, scripts and utilities shared across all Triton repositories. ☆79 · Updated Apr 24, 2026
- Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inference Server. ☆74 · Updated Apr 15, 2026
- Serve, optimize and scale PyTorch models in production. ☆4,359 · Updated Aug 6, 2025
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT. ☆12,947 · Updated Apr 13, 2026
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server. ☆284 · Updated Jun 2, 2022
- Standardized Distributed Generative and Predictive AI Inference Platform for Scalable, Multi-Framework Deployment on Kubernetes. ☆5,395 · Updated Apr 24, 2026
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,687 · Updated Oct 23, 2024
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations. ☆13,487 · Updated this week
- Transformer related optimization, including BERT, GPT. ☆6,412 · Updated Mar 27, 2024
- FIL backend for the Triton Inference Server. ☆90 · Updated this week
- A client library in Rust for Nvidia Triton. ☆31 · Updated Aug 3, 2023
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT. ☆2,968 · Updated Apr 25, 2026
- Integrating SSE with NVIDIA Triton Inference Server using a Python backend and Zephyr model. There is very little documentation on how to use … ☆10 · Updated May 29, 2024
- ONNX-TensorRT: TensorRT backend for ONNX. ☆3,204 · Updated Mar 25, 2026
- Tiny configuration for Triton Inference Server. ☆45 · Updated Jan 10, 2025
- Triton backend for https://github.com/OpenNMT/CTranslate2. ☆35 · Updated Jul 7, 2023
- ☆57 · Updated Oct 17, 2023
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization tools. ☆3,376 · Updated this week
- Simplify your onnx model. ☆4,328 · Updated this week
- Development repository for the Triton language and compiler. ☆19,087 · Updated this week
- Whisper in TensorRT-LLM. ☆17 · Updated Sep 21, 2023
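To contrast with the raw python_backend approach, here is a minimal sketch of the PyTriton style referenced in the list above: a plain Python function bound to Triton in-process. The model name, tensor names, and doubling logic are illustrative, not from PyTriton's docs verbatim.

```python
# Minimal PyTriton sketch: serve a Python function with Triton in-process.
import numpy as np
from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton


@batch
def infer_fn(INPUT0: np.ndarray):
    # Placeholder logic mirroring the python_backend sketch: double the batch.
    return {"OUTPUT0": INPUT0 * 2}


with Triton() as triton:
    triton.bind(
        model_name="Doubler",  # hypothetical model name
        infer_func=infer_fn,
        inputs=[Tensor(name="INPUT0", dtype=np.float32, shape=(-1,))],
        outputs=[Tensor(name="OUTPUT0", dtype=np.float32, shape=(-1,))],
        config=ModelConfig(max_batch_size=8),
    )
    triton.serve()  # blocks, serving HTTP/gRPC endpoints
```

The trade-off: python_backend runs your code inside a standalone Triton server with a model repository, while PyTriton embeds Triton behind a decorator-style API, closer to how Flask or FastAPI apps are written.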