triton-inference-server / server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
☆ 9,755 · Updated this week
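Triton's HTTP endpoint implements the KServe v2 inference protocol, which includes a standard health route. A minimal liveness probe can be sketched with the Python standard library alone; the `http://localhost:8000` address is an assumption (Triton's default HTTP port), not something this listing specifies:

```python
# Minimal sketch: liveness check against a Triton Inference Server's
# HTTP/REST endpoint (KServe v2 protocol). Assumes the server listens
# on localhost:8000, Triton's default HTTP port.
import urllib.request


def triton_is_live(base_url: str = "http://localhost:8000") -> bool:
    """Return True if the server answers GET /v2/health/live with 200."""
    req = urllib.request.Request(f"{base_url}/v2/health/live", method="GET")
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            # Triton returns an empty 200 body when the server is live.
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, or timeout: treat as not live.
        return False
```

The same check from a shell is `curl -f localhost:8000/v2/health/live`; a companion `/v2/health/ready` route reports whether models are loaded and ready to serve.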
Alternatives and similar repositories for server
Users interested in server compare it with the libraries listed below.
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆ 12,125 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆ 2,848 · Updated last week
- Serve, optimize and scale PyTorch models in production ☆ 4,348 · Updated last month
- Transformer-related optimization, including BERT and GPT ☆ 6,295 · Updated last year
- Standardized distributed generative and predictive AI inference platform for scalable, multi-framework deployment on Kubernetes ☆ 4,524 · Updated last week
- An easy-to-use PyTorch to TensorRT converter ☆ 4,805 · Updated last year
- Simplify your ONNX model ☆ 4,165 · Updated 2 weeks ago
- ONNX-TensorRT: TensorRT backend for ONNX ☆ 3,149 · Updated last month
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and support state-of-the-art optimizati… ☆ 11,531 · Updated this week
- Ongoing research training transformer models at scale ☆ 13,541 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ☆ 2,720 · Updated this week
- The easiest way to serve AI apps and models: build model inference APIs, job queues, LLM apps, multi-model pipelines, and more ☆ 8,053 · Updated this week
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… ☆ 5,503 · Updated this week
- ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator ☆ 17,786 · Updated this week
- PyTorch extensions for high-performance and large-scale training ☆ 3,369 · Updated 4 months ago
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python ☆ 638 · Updated this week
- This repository contains tutorials and examples for Triton Inference Server ☆ 768 · Updated this week
- Open deep learning compiler stack for CPU, GPU and specialized accelerators ☆ 12,584 · Updated this week
- Development repository for the Triton language and compiler ☆ 16,831 · Updated this week
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX ☆ 2,469 · Updated last month
- Fast and memory-efficient exact attention ☆ 19,385 · Updated last week
- CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision ☆ 2,578 · Updated 3 months ago
- A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Auto…