openvinotoolkit / model_server
A scalable inference server for models optimized with OpenVINO™
☆742 · Updated this week
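model_server exposes standard inference protocols over gRPC and REST, including the KServe v2 API, so a served model can be queried with plain HTTP. Below is a minimal sketch; the model name (`my_model`), port (8000, enabled via `--rest_port`), and the input name/shape are illustrative assumptions, not defaults.

```python
# A minimal sketch of querying model_server over its KServe v2 REST API.
# The model name, port, and tensor metadata are illustrative assumptions.
import requests

payload = {
    "inputs": [
        {
            "name": "input",     # must match the served model's input name
            "shape": [1, 10],
            "datatype": "FP32",
            "data": [0.0] * 10,  # tensor data in row-major order
        }
    ]
}

resp = requests.post(
    "http://localhost:8000/v2/models/my_model/infer",
    json=payload,
    timeout=10,
)
resp.raise_for_status()
out = resp.json()["outputs"][0]
print(out["name"], out["data"][:5])
```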
Alternatives and similar repositories for model_server
Users interested in model_server are comparing it to the libraries listed below.
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆715 · Updated this week
- TensorFlow/TensorRT integration ☆743 · Updated last year
- DL Streamer is now part of the Open Edge Platform; for the latest updates and releases, please visit the new repo: https://github.com/open-edge-platfo… ☆562 · Updated last month
- Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala (see the client sketch after this list). ☆632 · Updated this week
- Common source, scripts, and utilities for creating Triton backends. ☆330 · Updated last week
- Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of Triton Inference Server models. ☆479 · Updated last month
- Convert tf.keras/Keras models to ONNX. ☆380 · Updated 3 years ago
- Repository for OpenVINO's extra modules. ☆130 · Updated last week
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆136 · Updated this week
- Run generative AI models with a simple C++/Python API on top of the OpenVINO Runtime (see the GenAI sketch after this list). ☆301 · Updated this week
- The framework to generate a Dockerfile, then build, test, and deploy a Docker image with the OpenVINO™ toolkit. ☆67 · Updated last week
- ONNXMLTools enables conversion of models to ONNX. ☆1,094 · Updated last month
- 🤗 Optimum Intel: accelerate inference with Intel optimization tools. ☆475 · Updated this week
- Train, evaluate, optimize, and deploy computer vision models via OpenVINO™. ☆1,194 · Updated this week
- Common utilities for ONNX converters. ☆274 · Updated last week
- Explore the capabilities of the TensorRT platform. ☆264 · Updated 3 years ago
- Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python (see the backend sketch after this list). ☆622 · Updated this week
- Triton Model Navigator is an inference toolkit for optimizing and deploying deep learning models, with a focus on NVIDIA GPUs. ☆206 · Updated 2 months ago
- OpenVINO™ integration with TensorFlow. ☆179 · Updated last year
- Dockerfiles and scripts for ONNX container images. ☆137 · Updated 2 years ago
- Sample apps that demonstrate how to deploy models trained with TAO on DeepStream. ☆419 · Updated 4 months ago
- A performant and modular runtime for TensorFlow. ☆758 · Updated 2 months ago
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime. ☆397 · Updated this week
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments (see the PyTriton sketch after this list). ☆804 · Updated last week
- Computation using data flow graphs for scalable machine learning. ☆67 · Updated this week
- Neural Network Compression Framework for enhanced OpenVINO™ inference. ☆1,055 · Updated this week
- Examples for using ONNX Runtime for model training. ☆338 · Updated 8 months ago
- ONNX Optimizer ☆727 · Updated this week
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server. ☆285 · Updated 3 years ago
- Inference Model Manager for Kubernetes. ☆46 · Updated 6 years ago
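A few of the entries above are easiest to grasp from a short example. First, the Triton client libraries: a minimal sketch of the HTTP client (`pip install tritonclient[http]`), assuming a server on localhost:8000 serving a model named `my_model` with tensors `INPUT0`/`OUTPUT0` (all names and shapes are assumptions for illustration).

```python
# A minimal sketch of calling a Triton server with the tritonclient HTTP API.
# Server address, model name, and tensor names/shapes are assumptions.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request: one FP32 input tensor filled with random data.
data = np.random.rand(1, 3, 224, 224).astype(np.float32)
inputs = [httpclient.InferInput("INPUT0", list(data.shape), "FP32")]
inputs[0].set_data_from_numpy(data)

# Run inference and read the named output back as a numpy array.
result = client.infer(model_name="my_model", inputs=inputs)
print(result.as_numpy("OUTPUT0").shape)
```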
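For OpenVINO™ GenAI, a minimal sketch of the Python side of its C++/Python API, assuming a local LLM already exported to OpenVINO IR (the directory name is illustrative):

```python
# A minimal sketch of the openvino_genai LLMPipeline API; the model
# directory (an LLM exported to OpenVINO IR, e.g. via optimum-intel)
# is an illustrative assumption.
import openvino_genai

pipe = openvino_genai.LLMPipeline("./TinyLlama-1.1B-ov", "CPU")
print(pipe.generate("What is OpenVINO?", max_new_tokens=64))
```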
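For the Triton Python backend, a model is a `model.py` file exposing a `TritonPythonModel` class; below is a minimal echo sketch, with tensor names assumed to match a hypothetical `config.pbtxt`.

```python
# model.py — a minimal sketch of a Triton Python-backend model that echoes
# its input. INPUT0/OUTPUT0 are assumptions and must match config.pbtxt.
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            # Any Python pre-/post-processing logic can happen here.
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            out0 = pb_utils.Tensor("OUTPUT0", in0.as_numpy())
            responses.append(
                pb_utils.InferenceResponse(output_tensors=[out0])
            )
        return responses
```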
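And for PyTriton, a minimal sketch of its Flask/FastAPI-like flow: decorate a Python callable, bind it to a Triton instance, and serve (the model name and tensor shapes are illustrative assumptions).

```python
# A minimal sketch of serving a Python callable with PyTriton
# (pip install nvidia-pytriton); names and shapes are assumptions.
import numpy as np
from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton


@batch
def infer_fn(INPUT0):
    # Toy "model": double the batched input.
    return {"OUTPUT0": INPUT0 * 2}


with Triton() as triton:
    triton.bind(
        model_name="doubler",
        infer_func=infer_fn,
        inputs=[Tensor(name="INPUT0", dtype=np.float32, shape=(-1,))],
        outputs=[Tensor(name="OUTPUT0", dtype=np.float32, shape=(-1,))],
        config=ModelConfig(max_batch_size=64),
    )
    triton.serve()  # blocks and serves over Triton's standard endpoints
```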