Curt-Park / mnist-fastapi-aio-triton
Simple example of FastAPI + gRPC AsyncIO + Triton
☆67 · Updated 3 years ago
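A minimal sketch of the pattern this repo demonstrates: a FastAPI route forwarding requests to Triton through the asyncio gRPC client shipped with `tritonclient`. The model name (`mnist_cnn`) and tensor names (`input__0`, `output__0`) are hypothetical placeholders, not taken from the repo.

```python
# Sketch of FastAPI + gRPC AsyncIO + Triton; names below are illustrative.
import numpy as np
from fastapi import FastAPI
import tritonclient.grpc.aio as grpcclient  # async gRPC client from tritonclient

app = FastAPI()
client = grpcclient.InferenceServerClient(url="localhost:8001")

@app.post("/predict")
async def predict(pixels: list[float]):
    # Pack the flat pixel list into an NCHW float32 tensor (1x1x28x28 for MNIST).
    arr = np.asarray(pixels, dtype=np.float32).reshape(1, 1, 28, 28)
    inp = grpcclient.InferInput("input__0", list(arr.shape), "FP32")
    inp.set_data_from_numpy(arr)
    # Await inference so the event loop stays free for other requests.
    result = await client.infer(model_name="mnist_cnn", inputs=[inp])
    logits = result.as_numpy("output__0")
    return {"digit": int(logits.argmax())}
```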
Alternatives and similar repositories for mnist-fastapi-aio-triton
Users interested in mnist-fastapi-aio-triton are comparing it to the libraries listed below.
- Simple example of FastAPI + Celery + Triton for benchmarking ☆64 · Updated 3 years ago
- The Triton backend for TensorRT. ☆78 · Updated last week
- ☆33 · Updated 3 years ago
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆138 · Updated last week
- The Triton backend for the ONNX Runtime. ☆161 · Updated last week
- ☆296 · Updated last week
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying deep learning models, with a focus on NVIDIA GPUs. ☆211 · Updated 4 months ago
- ☆32 · Updated 2 years ago
- Triton backend for https://github.com/OpenNMT/CTranslate2 ☆35 · Updated 2 years ago
- Integrating SSE with NVIDIA Triton Inference Server using a Python backend and Zephyr model. There is very little documentation on how to use … ☆10 · Updated last year
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments (see the sketch after this list). ☆816 · Updated last month
- Showing various ways to serve Keras-based Stable Diffusion ☆111 · Updated 2 years ago
- MagFace Triton Inference Server using TensorRT ☆17 · Updated 3 years ago
- Plugin for deploying MLflow models to TorchServe ☆110 · Updated 2 years ago
- ✨ Beautiful OCR Project Team Code by Team DKT ☆12 · Updated 4 years ago
- Converting weights of PyTorch models to ONNX & TensorRT engines ☆50 · Updated 2 years ago
- ☆11 · Updated last year
- A set of demos of deploying a machine learning model in production using various methods ☆60 · Updated 3 years ago
- Tiny configuration for Triton Inference Server ☆45 · Updated 8 months ago
- Archives for Triton Inference Server practices ☆15 · Updated 3 years ago
- Easy and Efficient Quantization for Transformers ☆203 · Updated 2 months ago
- This project shows how to serve an ONNX-optimized image classification model as a web service with FastAPI, Docker, and Kubernetes. ☆220 · Updated 3 years ago
- The Triton backend for PyTorch TorchScript models. ☆159 · Updated last week
- TorchServe + TensorRT + detection ☆19 · Updated 3 years ago
- Using the open-source LLM Llama 2 by Meta for local CPU inference for document question-and-answer ☆15 · Updated last year
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆490 · Updated last week
- This is a repo with a Triton Server deployment template ☆24 · Updated last year
- Common source, scripts and utilities for creating Triton backends. ☆347 · Updated last week
- Count GitHub Stars ⭐ ☆30 · Updated 4 months ago
- Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inferen… ☆67 · Updated last week
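For the PyTriton entry above, a minimal sketch of its bind-and-serve flow: a plain Python callable is registered as a Triton model and served in-process. The model name `Doubler`, the tensor names, and the doubling function are made up for illustration.

```python
# PyTriton sketch: bind a Python callable as a Triton model and serve it.
# "Doubler" and the infer function are illustrative placeholders.
import numpy as np
from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton

@batch  # collects individual requests into batched numpy arrays
def infer_fn(data):
    # Toy model: double the input batch.
    return {"out": data * 2.0}

with Triton() as triton:
    triton.bind(
        model_name="Doubler",
        infer_func=infer_fn,
        inputs=[Tensor(name="data", dtype=np.float32, shape=(-1,))],
        outputs=[Tensor(name="out", dtype=np.float32, shape=(-1,))],
        config=ModelConfig(max_batch_size=64),
    )
    triton.serve()  # blocks, exposing Triton's HTTP/gRPC endpoints
```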