aws-samples / sagemaker-cv-preprocessing-training-performance
A SageMaker training implementation for computer vision that offloads JPEG decoding and augmentation to GPUs with NVIDIA DALI, letting you compare and reduce training time by relieving CPU bottlenecks caused by a growing data pre-processing load. Bottlenecks are identified with Amazon SageMaker Debugger.
☆21 · Updated 3 years ago
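The core idea is to decode JPEGs with nvJPEG and run augmentations on the GPU so the CPU data loader stops being the bottleneck. Below is a minimal sketch of such a DALI pipeline, assuming DALI's `fn` API and the standard SageMaker training channel path `/opt/ml/input/data/train`; it is illustrative and not taken from this repository.

```python
# Minimal sketch: JPEG decode + augmentation on the GPU with NVIDIA DALI.
# Paths, batch size, and image size are illustrative assumptions.
from nvidia.dali import pipeline_def
import nvidia.dali.fn as fn
import nvidia.dali.types as types

@pipeline_def(batch_size=64, num_threads=4, device_id=0)
def train_pipeline(data_dir):
    # Read raw JPEG bytes and labels on the CPU
    jpegs, labels = fn.readers.file(file_root=data_dir, random_shuffle=True, name="Reader")
    # device="mixed" decodes with nvJPEG and places the images directly on the GPU
    images = fn.decoders.image(jpegs, device="mixed", output_type=types.RGB)
    # GPU-side augmentations: random resized crop, random flip, normalization
    images = fn.random_resized_crop(images, size=(224, 224))
    images = fn.crop_mirror_normalize(
        images,
        dtype=types.FLOAT,
        output_layout="CHW",
        mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
        std=[0.229 * 255, 0.224 * 255, 0.225 * 255],
        mirror=fn.random.coin_flip(probability=0.5),
    )
    return images, labels

# In a SageMaker training job, the training channel is typically mounted at
# /opt/ml/input/data/<channel_name>
pipe = train_pipeline(data_dir="/opt/ml/input/data/train")
pipe.build()
images, labels = pipe.run()
```

In practice the pipeline would be wrapped in a framework iterator (for example DALI's PyTorch plugin) inside the training loop, and SageMaker Debugger's profiler can then confirm whether CPU utilization and data-loading wait times drop after moving decoding onto the GPU.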
Alternatives and similar repositories for sagemaker-cv-preprocessing-training-performance:
Users interested in sagemaker-cv-preprocessing-training-performance are comparing it to the repositories listed below
- Distributed training with SageMaker's script mode using the Horovod distributed deep learning framework ☆32 · Updated 5 years ago
- How to deploy TorchServe on an Amazon EKS cluster for inference. ☆12 · Updated 4 years ago
- Deploy a Machine Learning Pipeline on AWS Fargate ☆13 · Updated 2 years ago
- Toolkit for allowing inference and serving with MXNet in SageMaker. Dockerfiles used for building SageMaker MXNet Containers are at https… ☆28 · Updated last year
- Build a Docker container to build, train, and deploy fast.ai-based deep learning models with Amazon SageMaker ☆13 · Updated 6 years ago
- Starter template for an image recognition server with FastAPI ☆24 · Updated 4 years ago
- This repository shows how to train an object detection algorithm with Detectron2 on Amazon SageMaker ☆28 · Updated 3 years ago
- My solution to the Global Data Science Challenge ☆36 · Updated 4 years ago
- A PyTorch-only inference wrapper for fastai ☆12 · Updated last year
- Hosting code-server on Amazon SageMaker ☆54 · Updated last year
- AutoGluon Docker ☆13 · Updated 4 years ago
- A high-performance data access library for machine learning tasks ☆74 · Updated last year
- Examples showing use of NGC containers and models within Amazon SageMaker ☆17 · Updated 2 years ago
- Repo for work on deep learning for tabular data ☆14 · Updated 4 years ago
- A fastai-free ONNX implementation for fastai ☆12 · Updated last year
- Sample code for parallelizing across multiple CPUs/GPUs on a single machine to speed up deep learning inference ☆33 · Updated 4 years ago
- This repository is part of a blog post that guides users through creating a visual search application using Amazon SageMaker and Amazon E… ☆11 · Updated last year
- Deploy FastAI Trained PyTorch Model in TorchServe and Host in Amazon SageMaker Inference Endpoint ☆74 · Updated 3 years ago
- Serve scikit-learn, XGBoost, TensorFlow, and PyTorch models with AWS Lambda container image support. ☆98 · Updated 7 months ago
- All notebooks for FastAI learning purposes. ☆15 · Updated 5 years ago
- BERT model as a serverless service ☆20 · Updated 3 years ago
- Visual search implementation resources, including an explanatory Jupyter notebook and Amazon SageMaker and AWS DeepLens code. ☆63 · Updated 3 years ago
- Examples of AI accelerators - GPU, AWS Inferentia, and Elastic Inference ☆32 · Updated 4 years ago
- Experiments with self-supervised learning ☆11 · Updated 5 years ago
- This project presents a simple framework to retrieve images similar to a query image. ☆28 · Updated 3 years ago
- From nothing to a deployed object detection model on SageMaker with Detectron2 ☆29 · Updated last year
- A Starlette example for deployment in fastai2 ☆11 · Updated 4 years ago
- FasterAI: A repository for making smaller and faster models with the FastAI library. ☆35 · Updated last year
- Dockerfile for deep learning on GPUs ☆10 · Updated 6 years ago