The Triton Inference Server provides an optimized cloud and edge inferencing solution.
☆10,446 · Mar 20, 2026 · Updated this week
Alternatives and similar repositories for server
Users interested in server are comparing it to the libraries listed below.
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… · ☆12,800 · Mar 9, 2026 · Updated last week
- Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala. · ☆686 · Mar 10, 2026 · Updated last week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… · ☆13,120 · Updated this week
- Transformer-related optimization, including BERT and GPT · ☆6,397 · Mar 27, 2024 · Updated last year
- Development repository for the Triton language and compiler · ☆18,656 · Mar 14, 2026 · Updated last week
- Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python. · ☆672 · Mar 10, 2026 · Updated last week
- Standardized Distributed Generative and Predictive AI Inference Platform for Scalable, Multi-Framework Deployment on Kubernetes · ☆5,216 · Updated this week
- Serve, optimize, and scale PyTorch models in production · ☆4,362 · Aug 6, 2025 · Updated 7 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆73,479 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. · ☆41,807 · Mar 13, 2026 · Updated last week
- ONNX-TensorRT: TensorRT backend for ONNX · ☆3,194 · Feb 3, 2026 · Updated last month
- Triton Model Analyzer is a CLI tool to help users better understand the compute and memory requirements of the Triton Inference Serv… · ☆507 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT · ☆2,958 · Updated this week
- This repository contains tutorials and examples for Triton Inference Server · ☆823 · Mar 10, 2026 · Updated last week
- Ongoing research training transformer models at scale · ☆15,744 · Updated this week
- SGLang is a high-performance serving framework for large language models and multimodal models. · ☆24,455 · Updated this week
- Open Machine Learning Compiler Framework · ☆13,197 · Updated this week
- ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator · ☆19,568 · Updated this week
- Common source, scripts, and utilities for creating Triton backends · ☆369 · Mar 10, 2026 · Updated last week
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. · ☆835 · Aug 13, 2025 · Updated 7 months ago
- An easy-to-use PyTorch to TensorRT converter · ☆4,858 · Aug 17, 2024 · Updated last year
- Fast and memory-efficient exact attention · ☆22,832 · Updated this week
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… · ☆5,642 · Mar 13, 2026 · Updated last week
- Large Language Model Text Generation Inference · ☆10,803 · Jan 8, 2026 · Updated 2 months ago
- Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI libraries for accelerating ML workloads. · ☆41,799 · Updated this week
- Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet · ☆14,679 · Dec 1, 2025 · Updated 3 months ago
- A library for efficient similarity search and clustering of dense vectors · ☆39,403 · Updated this week
- Open standard for machine learning interoperability · ☆20,484 · Updated this week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. · ☆7,694 · Mar 13, 2026 · Updated last week
- A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Auto… · ☆16,918 · Updated this week
- State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enter… · ☆14,745 · Aug 12, 2024 · Updated last year
- The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and more! · ☆8,520 · Updated this week
- A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch · ☆8,936 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch · ☆8,052 · Updated this week
- FlashInfer: Kernel Library for LLM Serving · ☆5,145 · Updated this week
- Visualizer for neural network, deep learning, and machine learning models · ☆32,592 · Updated this week
- A Datacenter-Scale Distributed Inference Serving Framework · ☆6,347 · Updated this week
- The Triton TensorRT-LLM Backend · ☆926 · Updated this week
- CUDA Templates and Python DSLs for High-Performance Linear Algebra · ☆9,442 · Updated this week