Curt-Park / mnist-fastapi-celery-triton
Simple example of FastAPI + Celery + Triton for benchmarking
☆64 · Updated 3 years ago
Alternatives and similar repositories for mnist-fastapi-celery-triton
Users interested in mnist-fastapi-celery-triton are comparing it to the repositories listed below.
- Simple example of FastAPI + gRPC AsyncIO + Triton ☆67 · Updated 3 years ago
- ☆46 · Updated 4 years ago
- ✨ Beautiful OCR Project Team Code by Team DKT ☆12 · Updated 4 years ago
- Archives for Triton Inference Server practices ☆15 · Updated 3 years ago
- This project shows how to serve a TF-based image classification model as a web service with TFServing, Docker, and Kubernetes (GKE). ☆125 · Updated 3 years ago
- ☆96 · Updated 3 years ago
- Making a PyTorch model easier than ever! ☆79 · Updated 3 years ago
- ☆156 · Updated 2 years ago
- Tiny configuration for Triton Inference Server ☆45 · Updated 7 months ago
- Python Project Template ☆68 · Updated 3 years ago
- Getting GPU Util 99% ☆34 · Updated 4 years ago
- Showing various ways to serve Keras-based Stable Diffusion ☆111 · Updated 2 years ago
- [KOREAN] Code for generating synthetic text images as described in "Synthetic Data for Text Localisation in Natural Images", Ankush Gupta… ☆32 · Updated 5 years ago
- CLEval: Character-Level Evaluation for Text Detection and Recognition Tasks ☆185 · Updated last year
- Inverse DALL-E for Optical Character Recognition ☆38 · Updated 2 years ago
- A set of demos of deploying a machine learning model in production using various methods ☆60 · Updated 3 years ago
- Inference API server with echo and gRPC to Triton server (Golang) ☆13 · Updated 2 years ago
- This is a repo with a Triton Server deployment template ☆24 · Updated last year
- ☆31 · Updated 3 years ago
- ☆22 · Updated 6 years ago
- Reproduction of Vision Transformer in TensorFlow 2. Train from scratch and fine-tune. ☆48 · Updated 3 years ago
- Machine Learning Pipeline for Semantic Segmentation with TensorFlow Extended (TFX) and various GCP products ☆95 · Updated 2 years ago
- ☆18 · Updated 2 years ago
- Serving example of CodeGen-350M-Mono-GPTJ on Triton Inference Server with Docker and Kubernetes ☆20 · Updated 2 years ago
- Automatic Mixed Precision tutorials using PyTorch. Based on PyTorch 1.6 official features, implement classification codebase using custo… ☆89 · Updated 4 years ago
- TorchServe server using a YOLOv5 model running on Docker with GPU and static batch inference to perform production-ready and real-time in… ☆99 · Updated 2 years ago
- ☆16 · Updated 3 years ago
- Auto-magical deployment of AI models at large scale, with high performance and ease of use ☆66 · Updated 2 years ago
- Swin Transformer customized for OCR ☆115 · Updated last year
- A math-formula image recognition project that placed first in a competition hosted by NAVER CONNECT boostcamp AI Tech ☆11 · Updated last year