The Triton Inference Server provides an optimized cloud and edge inferencing solution.
☆10,507 · Updated last week (Apr 2, 2026)
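Triton serves models over HTTP/REST and gRPC using the KServe v2 inference protocol. As a minimal sketch of what a request looks like, the body below is built with only the standard library; the model and tensor names (`INPUT0`) are placeholder assumptions, and actually sending it requires a running Triton instance:

```python
import json

def build_infer_request(input_name, data, datatype="FP32"):
    # Build a request body in the KServe v2 inference protocol, which
    # Triton's HTTP endpoint accepts at POST /v2/models/<model>/infer.
    return {
        "inputs": [{
            "name": input_name,        # must match the input name in the model config
            "shape": [1, len(data)],   # batch of 1, treated here as a flat vector
            "datatype": datatype,
            "data": data,
        }]
    }

payload = build_infer_request("INPUT0", [1.0, 2.0, 3.0, 4.0])
body = json.dumps(payload)
```

The resulting JSON would be POSTed to `http://<host>:8000/v2/models/<model>/infer`; in practice the `tritonclient` libraries listed below wrap this protocol for you.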
Alternatives and similar repositories for server
Users interested in server are comparing it to the libraries listed below.
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆12,877 · Updated 2 weeks ago (Mar 25, 2026)
- Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala. ☆686 · Updated this week
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆13,304 · Updated this week
- Transformer-related optimization, including BERT, GPT. ☆6,410 · Updated 2 years ago (Mar 27, 2024)
- Development repository for the Triton language and compiler. ☆18,840 · Updated this week
- Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python. ☆673 · Updated 3 weeks ago (Mar 19, 2026)
- Standardized Distributed Generative and Predictive AI Inference Platform for Scalable, Multi-Framework Deployment on Kubernetes. ☆5,305 · Updated this week
- Serve, optimize, and scale PyTorch models in production. ☆4,361 · Updated 8 months ago (Aug 6, 2025)
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆75,637 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆41,977 · Updated this week
- ONNX-TensorRT: TensorRT backend for ONNX. ☆3,201 · Updated 2 weeks ago (Mar 25, 2026)
- Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of the Triton Inference Serv… ☆508 · Updated last week (Mar 30, 2026)
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT. ☆2,962 · Updated this week
- This repository contains tutorials and examples for Triton Inference Server. ☆828 · Updated this week
- Ongoing research training transformer models at scale. ☆15,900 · Updated last week (Apr 3, 2026)
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆25,408 · Updated this week
- Open Machine Learning Compiler Framework. ☆13,252 · Updated this week
- ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator. ☆19,779 · Updated this week
- Common source, scripts, and utilities for creating Triton backends. ☆369 · Updated 3 weeks ago (Mar 10, 2026)
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. ☆842 · Updated 7 months ago (Aug 13, 2025)
- An easy-to-use PyTorch to TensorRT converter. ☆4,863 · Updated last year (Aug 17, 2024)
- Fast and memory-efficient exact attention. ☆23,185 · Updated this week
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… ☆5,658 · Updated last week (Apr 2, 2026)
- Large Language Model Text Generation Inference. ☆10,817 · Updated 2 weeks ago (Mar 21, 2026)
- Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI libraries for accelerating ML workloads. ☆41,915 · Updated last week (Apr 2, 2026)
- Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. ☆14,686 · Updated 4 months ago (Dec 1, 2025)
- A library for efficient similarity search and clustering of dense vectors. ☆39,628 · Updated this week
- Open standard for machine learning interoperability. ☆20,584 · Updated this week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,755 · Updated this week
- A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Auto… ☆17,048 · Updated this week
- State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enter… ☆14,768 · Updated last year (Aug 12, 2024)
- The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and more! ☆8,563 · Updated this week
- A PyTorch extension: tools for easy mixed precision and distributed training in PyTorch. ☆8,939 · Updated last week (Mar 31, 2026)
- Accessible large language models via k-bit quantization for PyTorch. ☆8,107 · Updated this week
- FlashInfer: Kernel Library for LLM Serving. ☆5,273 · Updated this week
- Visualizer for neural network, deep learning, and machine learning models. ☆32,696 · Updated this week
- A Datacenter-Scale Distributed Inference Serving Framework. ☆6,470 · Updated last week (Apr 3, 2026)
- The Triton TensorRT-LLM Backend. ☆930 · Updated 3 weeks ago (Mar 17, 2026)
- A flexible, high-performance serving system for machine learning models. ☆6,355 · Updated this week