deepjavalibrary / djl-serving
A universal scalable machine learning model deployment solution
☆243 · Updated this week
Alternatives and similar repositories for djl-serving
Users interested in djl-serving are comparing it to the libraries listed below.
- ☆111 · Updated 11 months ago
- Training and inference on AWS Trainium and Inferentia chips. ☆253 · Updated this week
- Example code for AWS Neuron SDK developers building inference and training applications ☆152 · Updated last week
- Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python. ☆662 · Updated last week
- Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala. ☆669 · Updated 2 weeks ago
- The Triton TensorRT-LLM Backend ☆909 · Updated last week
- Powering AWS purpose-built machine learning chips. Blazing fast and cost effective, natively integrated into PyTorch and TensorFlow and i… ☆569 · Updated this week
- ☆270 · Updated 8 months ago
- ☆321 · Updated last week
- Examples on how to use LangChain and Ray ☆232 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆25 · Updated last month
- ☆63 · Updated last week
- Large Language Model Hosting Container ☆90 · Updated 2 months ago
- 🆕 Find the k-nearest neighbors (k-NN) for your vector data ☆205 · Updated last week
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. ☆833 · Updated 4 months ago
- LLMPerf is a library for validating and benchmarking LLMs ☆1,068 · Updated last year
- Hands-on workshop for distributed training and hosting on SageMaker ☆151 · Updated last month
- Triton Model Analyzer is a CLI tool that helps with understanding the compute and memory requirements of the Triton Inference Serv… ☆501 · Updated last week
- The Triton backend for the ONNX Runtime. ☆170 · Updated 2 weeks ago
- This repository contains tutorials and examples for Triton Inference Server ☆814 · Updated 2 weeks ago
- Toolkit for allowing inference and serving with PyTorch on SageMaker. Dockerfiles used for building SageMaker PyTorch Containers are at h… ☆142 · Updated last year
- A high-performance inference system for large language models, designed for production environments. ☆489 · Updated last week
- This is a suite of hands-on training materials that shows how to scale CV, NLP, and time-series forecasting workloads with Ray. ☆451 · Updated last year
- ☆57 · Updated 2 weeks ago
- ☆131 · Updated this week
- Foundation model benchmarking tool. Run any model on any AWS platform and benchmark for performance across instance type and serving stac… ☆254 · Updated 8 months ago
- Use two different methods (DeepSpeed and the SageMaker model parallelism library) to fine-tune a Llama model on SageMaker. Then deploy the … ☆24 · Updated 2 years ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆216 · Updated 8 months ago
- A helper library to connect into Amazon SageMaker with AWS Systems Manager and SSH (Secure Shell) ☆258 · Updated 5 months ago
- ☆413 · Updated 2 years ago