nickaggarwal / nvidia-triton-llm-streaming
Integrating SSE with NVIDIA Triton Inference Server using a Python backend and a Zephyr model. There is very little documentation on using NVIDIA Triton for streaming use cases (it is hard to find in their docs), so this should be helpful for people who want to deploy streaming with Triton.
☆10Updated last year
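For orientation, here is a minimal sketch of what a decoupled (streaming) Triton Python backend model looks like. The tensor names ("PROMPT", "TEXT"), the placeholder generation loop, and the model wiring are illustrative assumptions, not the repository's actual code; the real repo pairs this pattern with a Zephyr model and an SSE front end.

```python
# model.py — minimal sketch of a decoupled (streaming) Triton Python backend.
# Assumes the model config enables streaming with:
#   model_transaction_policy { decoupled: true }
import numpy as np
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        for request in requests:
            prompt = pb_utils.get_input_tensor_by_name(request, "PROMPT").as_numpy()[0]

            # In decoupled mode each request gets a response sender, so the
            # backend can push many partial responses before closing the stream.
            sender = request.get_response_sender()

            for token in self.generate(prompt):  # placeholder generation loop
                out = pb_utils.Tensor("TEXT", np.array([token], dtype=np.object_))
                sender.send(pb_utils.InferenceResponse(output_tensors=[out]))

            # Signal end-of-stream; a client (e.g. an SSE proxy) stops reading here.
            sender.send(flags=pb_utils.TRITONSERVER_RESPONSE_COMPLETE_FINAL)

        # Decoupled models return None; all responses go through the senders.
        return None

    def generate(self, prompt):
        # Stand-in for real token-by-token generation with the LLM.
        yield from ["Hello", " ", "world"]
```

On the client side, the partial responses can be consumed with Triton's streaming gRPC client and forwarded as Server-Sent Events, which is roughly the pattern this repository demonstrates.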
Alternatives and similar repositories for nvidia-triton-llm-streaming
Users interested in nvidia-triton-llm-streaming are comparing it to the libraries listed below.
- Triton backend for https://github.com/OpenNMT/CTranslate2☆35Updated 2 years ago
- ☆286Updated last week
- Triton backend for https://github.com/OpenNMT/CTranslate2☆11Updated 11 months ago
- OpenAI compatible API for TensorRT LLM triton backend☆209Updated last year
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments.☆814Updated last week
- The Triton backend for TensorRT.☆77Updated last week
- The Triton backend for the ONNX Runtime.☆157Updated this week
- This repository contains tutorials and examples for Triton Inference Server☆751Updated last week
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs.☆211Updated 3 months ago
- Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inferen…☆66Updated last week
- Simple example of FastAPI + gRPC AsyncIO + Triton☆67Updated 2 years ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv…☆483Updated last week
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O…☆307Updated 2 months ago
- ☆99Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs☆266Updated 10 months ago
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python.☆629Updated this week
- The Triton TensorRT-LLM Backend☆875Updated this week
- Easy and Efficient Quantization for Transformers☆198Updated last month
- Whisper finetuned on VinBigdata-VLSP2020-100h + KenLM☆38Updated last year
- Common source, scripts and utilities for creating Triton backends.☆337Updated last week
- Getting started with TensorRT-LLM using BLOOM as a case study☆20Updated last year
- ☆32Updated 2 years ago
- NVIDIA Riva runnable tutorials☆141Updated last week
- Dynamic batching library for Deep Learning inference. Tutorials for LLM, GPT scenarios.☆102Updated 11 months ago
- Transforming spoken text to written text☆30Updated last year
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models.☆137Updated last year
- Triton Python, C++ and Java client libraries, and GRPC-generated client examples for go, java and scala.☆639Updated this week
- ONNX and TensorRT implementation of Whisper☆64Updated 2 years ago
- A tool to configure, launch and manage your machine learning experiments.☆176Updated this week
- A high-throughput and memory-efficient inference and serving engine for Whisper, https://mesolitica.com/blog/vllm-whisper☆29Updated last year