FerrariDG / async-ml-inference
PoC using FastAPI and Celery for ML inference
☆90 · Updated 2 years ago
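The repositories on this page all implement the same pattern: an HTTP endpoint enqueues an inference job and returns a task id, a background worker runs the model, and the client polls for the result. As a hedged illustration (none of the names below come from the repo itself, which uses FastAPI and Celery with a message broker), here is a stdlib-only sketch of that enqueue/poll flow:

```python
import queue
import threading
import uuid

# In the listed projects the broker is Celery/Redis and the HTTP layer is
# FastAPI; this stdlib-only sketch shows the same enqueue/poll pattern.
tasks: queue.Queue = queue.Queue()
results: dict = {}

def fake_model(x: float) -> float:
    # Stand-in for a real ML model's predict() call.
    return x * 2.0

def worker() -> None:
    # Background worker: pull a task, run inference, store the result.
    while True:
        task_id, payload = tasks.get()
        results[task_id] = fake_model(payload)
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()

def submit(payload: float) -> str:
    # Plays the role of POST /predict: enqueue and return a task id at once.
    task_id = str(uuid.uuid4())
    tasks.put((task_id, payload))
    return task_id

def poll(task_id: str):
    # Plays the role of GET /result/{task_id}: None until the worker is done.
    return results.get(task_id)

task_id = submit(21.0)
tasks.join()  # a real client would poll repeatedly instead of blocking
print(poll(task_id))  # prints 42.0
```

The key property, shared by every project listed below, is that the submit path never blocks on the model: slow inference is decoupled from the request/response cycle.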
Alternatives and similar repositories for async-ml-inference
Users interested in async-ml-inference are comparing it to the repositories listed below.
- Working example for serving an ML model using FastAPI and Celery (☆75 · Updated 3 years ago)
- Deploy and scale machine learning models with FastAPI, Redis and Docker (☆150 · Updated 3 years ago)
- Completely scalable FastAPI-based template for Machine Learning, Deep Learning and any other software project which wants to use Fast API… (☆226 · Updated last month)
- Deploying a PyTorch model to production with FastAPI in a CUDA-supported Docker container (☆103 · Updated 3 years ago)
- A demo of Prometheus + Grafana for monitoring an ML model served with FastAPI (☆233 · Updated last year)
- Management Dashboard for Torchserve