triton-inference-server / tensorflow_backend
The Triton backend for TensorFlow.
☆57 · Updated 2 months ago
Alternatives and similar repositories for tensorflow_backend
Users interested in tensorflow_backend are comparing it to the libraries listed below.
- The Triton backend for the ONNX Runtime. ☆171 · Updated last week
- Common source, scripts and utilities shared across all Triton repositories. ☆79 · Updated last week
- Common source, scripts and utilities for creating Triton backends. ☆366 · Updated last week
- The Triton backend for PyTorch TorchScript models. ☆171 · Updated last week
- The core library and APIs implementing the Triton Inference Server. ☆163 · Updated last week
- Triton Model Analyzer is a CLI tool for understanding the compute and memory requirements of models served by the Triton Inference Server. ☆503 · Updated last week
- The Triton backend for TensorRT. ☆84 · Updated this week
- Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala (a minimal Python client call is sketched after this list). ☆676 · Updated last week
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying deep learning models, with a focus on NVIDIA GPUs. ☆215 · Updated 9 months ago
- FIL backend for the Triton Inference Server. ☆87 · Updated last week
- Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python (see the model.py sketch after this list). ☆667 · Updated this week
- OpenVINO backend for Triton. ☆36 · Updated last week
- Triton CLI is an open-source command-line interface that enables users to create, deploy, and profile models served by the Triton Inference Server. ☆73 · Updated last week
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆140 · Updated this week
- Pretrain, fine-tune, and serve LLMs on Intel platforms with Ray. ☆131 · Updated 4 months ago
- Common utilities for ONNX converters. ☆292 · Updated last month
- MLflow deployment plugin for Ray Serve. ☆46 · Updated 3 years ago
- Model compression for ONNX. ☆99 · Updated last year
- Triton backend for automatically managing model state tensors in the sequence batcher. ☆17 · Updated last year
- Dynamic batching library for deep learning inference, with tutorials for LLM and GPT scenarios. ☆106 · Updated last year
- Provides Python access to the NVML library for GPU diagnostics (see the usage sketch after this list). ☆258 · Updated 4 months ago
- ClearML - Model-Serving Orchestration and Repository Solution. ☆161 · Updated 3 weeks ago
- Computation using data flow graphs for scalable machine learning. ☆68 · Updated this week
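
A few of the entries above are easier to evaluate with a concrete snippet. For the Triton client libraries, here is a minimal sketch of an HTTP inference call in Python; the model name `my_model` and the tensor names `INPUT0`/`OUTPUT0` are assumptions and must match the deployed model's configuration.

```python
# Minimal Triton HTTP client sketch (pip install tritonclient[http]).
# Assumes a server on localhost:8000 serving a hypothetical model
# "my_model" with one FP32 input "INPUT0" and one output "OUTPUT0".
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request tensor and attach the data.
data = np.random.rand(1, 16).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

# Run inference and read the output back as a NumPy array.
result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```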
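For the Python backend, a model is a `model.py` implementing the `TritonPythonModel` interface. The sketch below is a trivial identity model, assuming a `config.pbtxt` that declares an input `INPUT0` and an output `OUTPUT0` (both names are illustrative).

```python
# model.py -- minimal sketch of a Triton Python backend model.
# The tensor names are assumptions; they must match config.pbtxt.
import numpy as np
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            # Read the input tensor and apply Python-side logic
            # (here, a trivial pass-through cast to FP32).
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            out0 = pb_utils.Tensor("OUTPUT0",
                                   in0.as_numpy().astype(np.float32))
            responses.append(
                pb_utils.InferenceResponse(output_tensors=[out0]))
        return responses
```

The point of this backend is exactly what the entry says: pre- and post-processing steps that would otherwise need a custom C++ backend can live in ordinary Python inside `execute`.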
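And for the NVML bindings, a short usage sketch for reading basic GPU diagnostics; device index 0 is an assumption.

```python
# Sketch of GPU diagnostics via the pynvml bindings listed above.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU, assumed
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
util = pynvml.nvmlDeviceGetUtilizationRates(handle)
print(f"used {mem.used / 1e9:.2f} GB of {mem.total / 1e9:.2f} GB, "
      f"GPU util {util.gpu}%")
pynvml.nvmlShutdown()
```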