Curt-Park / mnist-fastapi-aio-triton
Simple example of FastAPI + gRPC AsyncIO + Triton
☆59 · Updated 2 years ago
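For orientation, the pattern the title describes looks roughly like this: a FastAPI endpoint that forwards a request to Triton over gRPC with the asyncio client. A minimal sketch, assuming a Triton server at `localhost:8001` serving a model named `mnist` with tensors `input__0`/`output__0` (all names are illustrative, not taken from the repo):

```python
# Minimal FastAPI + gRPC AsyncIO + Triton sketch (names are assumptions).
import numpy as np
import tritonclient.grpc.aio as grpcclient
from fastapi import FastAPI

app = FastAPI()
client = grpcclient.InferenceServerClient(url="localhost:8001")

@app.post("/predict")
async def predict(pixels: list[float]) -> dict:
    # Shape the flat pixel list into the NCHW batch the model expects.
    batch = np.asarray(pixels, dtype=np.float32).reshape(1, 1, 28, 28)
    infer_input = grpcclient.InferInput("input__0", list(batch.shape), "FP32")
    infer_input.set_data_from_numpy(batch)
    # The aio client's infer() is a coroutine, so the endpoint stays non-blocking.
    result = await client.infer(
        model_name="mnist",
        inputs=[infer_input],
        outputs=[grpcclient.InferRequestedOutput("output__0")],
    )
    scores = result.as_numpy("output__0")
    return {"digit": int(scores.argmax())}
```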
Related projects
Alternatives and complementary repositories for mnist-fastapi-aio-triton
- Simple example of FastAPI + Celery + Triton for benchmarking ☆61 · Updated 2 years ago
- Tiny configuration for Triton Inference Server ☆43 · Updated this week
- Archives for Triton Inference Server Practices ☆15 · Updated 2 years ago
- ✨ Beautiful OCR Project Team Code by Team DKT ☆12 · Updated 3 years ago
- Deploy a Stable Diffusion model with ONNX/TensorRT + Triton Server ☆123 · Updated last year
- MagFace Triton Inference Server using TensorRT ☆15 · Updated 2 years ago
- Inference API server with Echo and gRPC to Triton server (Golang) ☆12 · Updated 2 years ago
- This project shows how to serve a TF-based image classification model as a web service with TF Serving, Docker, and Kubernetes (GKE). ☆120 · Updated 2 years ago
- Converting weights of PyTorch models to ONNX & TensorRT engines (see the export sketch after this list) ☆46 · Updated last year
- Showing various ways to serve a Keras-based Stable Diffusion model ☆109 · Updated last year
- The Triton backend for TensorRT. ☆64 · Updated this week
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆125 · Updated 2 weeks ago
- Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch), including a converter from PyTorch -> O… ☆32 · Updated 3 years ago
- Triton backend for https://github.com/OpenNMT/CTranslate2 ☆32 · Updated last year
- This repository serves as an example of deploying YOLO models on Triton Server for performance and testing purposes ☆44 · Updated 5 months ago
- The Triton backend for the ONNX Runtime. ☆133 · Updated this week
- Using Meta's open-source LLM Llama 2 for local CPU inference for document question answering ☆15 · Updated last year
- Serving Example of CodeGen-350M-Mono-GPTJ on Triton Inference Server with Docker and Kubernetes ☆20 · Updated last year
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆185 · Updated 2 months ago
- Python Project Template ☆67 · Updated 2 years ago
- The Triton backend for PyTorch TorchScript models. ☆127 · Updated this week
- A repo with a Triton Server deployment template ☆23 · Updated 3 months ago
- This repo shows how to build a full working example that serves your model with asynchronous Celery tasks and FastAPI. 🔥 … ☆26 · Updated 6 months ago
- This project shows how to serve an ONNX-optimized image classification model as a web service with FastAPI, Docker, and Kubernetes. ☆197 · Updated 2 years ago
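As a companion to the PyTorch-to-ONNX converter entry above, here is a minimal export sketch using the standard `torch.onnx.export` API. The model, file names, and opset are illustrative assumptions, not details from that repo:

```python
# Minimal PyTorch -> ONNX export sketch (model and names are assumptions).
import torch
import torchvision

# Any torch.nn.Module works here; an untrained ResNet-18 keeps the sketch self-contained.
model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)  # example input that traces the graph

torch.onnx.export(
    model,
    dummy,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=17,
)
# A TensorRT engine is typically built from the ONNX file afterwards, e.g.:
#   trtexec --onnx=resnet18.onnx --saveEngine=resnet18.plan
```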