PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments.
☆835 · Aug 13, 2025 · Updated 6 months ago
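For context on that "Flask/FastAPI-like" claim: PyTriton lets you bind a plain Python inference function to a Triton endpoint without hand-writing a model repository. A minimal sketch following the binding pattern from PyTriton's docs — the model name, tensor names, and doubling logic here are purely illustrative:

```python
import numpy as np

from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton


@batch
def infer_fn(input_1):
    # @batch passes each named input as a batched numpy array;
    # return a dict keyed by the declared output names.
    return {"output_1": input_1 * 2.0}


with Triton() as triton:
    triton.bind(
        model_name="Doubler",  # illustrative name
        infer_func=infer_fn,
        inputs=[Tensor(name="input_1", dtype=np.float32, shape=(-1,))],
        outputs=[Tensor(name="output_1", dtype=np.float32, shape=(-1,))],
        config=ModelConfig(max_batch_size=128),
    )
    triton.serve()  # blocks, exposing Triton's usual HTTP/gRPC endpoints
```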
Alternatives and similar repositories for pytriton
Users interested in pytriton are comparing it to the libraries listed below.
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆217 · Feb 3, 2026 · Updated last month
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆506 · Updated this week
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python. ☆673 · Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,406 · Updated this week
- This repository contains tutorials and examples for Triton Inference Server. ☆824 · Feb 9, 2026 · Updated last month
- Triton Python, C++ and Java client libraries, and GRPC-generated client examples for Go, Java and Scala (see the client sketch after this list). ☆684 · Feb 24, 2026 · Updated last week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆3,310 · Feb 9, 2026 · Updated last month
- The Triton TensorRT-LLM Backend ☆926 · Updated this week
- Large Language Model Text Generation Inference ☆10,795 · Jan 8, 2026 · Updated 2 months ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,956 · Updated this week
- Transformer related optimization, including BERT, GPT ☆6,398 · Mar 27, 2024 · Updated last year
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,187 · Updated this week
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,688 · Oct 23, 2024 · Updated last year
- The Triton backend for PyTorch TorchScript models. ☆173 · Updated this week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆12,993 · Updated this week
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,097 · Jun 30, 2025 · Updated 8 months ago
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,706 · Feb 27, 2026 · Updated last week
- Common source, scripts and utilities for creating Triton backends. ☆370 · Feb 9, 2026 · Updated last month
- A collection of public Korean instruction datasets for training language models. ☆19 · Jul 16, 2023 · Updated 2 years ago
- Serve, optimize and scale PyTorch models in production ☆4,360 · Aug 6, 2025 · Updated 7 months ago
- Development repository for the Triton language and compiler ☆18,573 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆8,019 · Updated this week
- Triton backend for managing the model state tensors automatically in sequence batcher ☆17 · Feb 12, 2024 · Updated 2 years ago
- PyTorch native quantization and sparsity for training and inference ☆2,722 · Updated this week
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,585 · Jan 28, 2026 · Updated last month
- Standardized Distributed Generative and Predictive AI Inference Platform for Scalable, Multi-Framework Deployment on Kubernetes ☆5,162 · Updated this week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆12,753 · Updated this week
- Fast and memory-efficient exact attention ☆22,460 · Updated this week
- Simple, safe way to store and distribute tensors ☆3,656 · Updated this week
- The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and more! ☆8,487 · Updated this week
- The core library and APIs implementing the Triton Inference Server. ☆170 · Updated this week
- The Triton backend for the ONNX Runtime. ☆173 · Updated this week
- Module, Model, and Tensor Serialization/Deserialization ☆290 · Feb 6, 2026 · Updated last month
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,528 · Updated this week
- Large-scale language modeling tutorials with PyTorch ☆292 · Nov 2, 2021 · Updated 4 years ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,900 · Jan 21, 2024 · Updated 2 years ago
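Since several entries above are server-side Triton components, here is the other half of the picture: a hedged sketch of querying a model over HTTP with the tritonclient library from the client-libraries entry. It assumes a Triton (or PyTriton) server on localhost:8000 serving the illustrative "Doubler" model from the first sketch:

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to the (assumed) local Triton HTTP endpoint.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a single-item batch matching the model's declared input.
data = np.array([[1.0, 2.0, 3.0]], dtype=np.float32)
inp = httpclient.InferInput("input_1", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

# Run inference and read back the named output tensor.
result = client.infer(model_name="Doubler", inputs=[inp])
print(result.as_numpy("output_1"))  # expected: [[2. 4. 6.]]

client.close()
```

The same request can be issued over gRPC by swapping in tritonclient.grpc, which mirrors this API.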