openvinotoolkit / model_server
A scalable inference server for models optimized with OpenVINO™
☆739 · Updated this week
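model_server serves OpenVINO-optimized models over TensorFlow-Serving-compatible and KServe-compatible REST/gRPC endpoints. A minimal client sketch, assuming a model named "resnet" is served with REST enabled on port 8000 (the model name, port, and input shape are placeholders, not part of this listing):

```python
# Hedged sketch: query model_server's TensorFlow-Serving-style REST endpoint.
import numpy as np
import requests

# dummy NHWC image batch; the real shape depends on the served model
batch = np.zeros((1, 224, 224, 3), dtype=np.float32)
payload = {"instances": batch.tolist()}

resp = requests.post(
    "http://localhost:8000/v1/models/resnet:predict",  # assumed name/port
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["predictions"])
```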
Alternatives and similar repositories for model_server
Users interested in model_server are comparing it to the libraries listed below.
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆718 · Updated last week
- Inference Model Manager for Kubernetes ☆46 · Updated 6 years ago
- A multi-user, distributed computing environment for running DL model training experiments on Intel® Xeon® Scalable processor-based system… ☆392 · Updated last year
- TensorFlow/TensorRT integration ☆742 · Updated last year
- This repository is home to the Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework. Pipeline Framework is a streaming med… ☆561 · Updated last month
- Run generative AI models with a simple C++/Python API using OpenVINO Runtime (usage sketch after this list) ☆293 · Updated this week
- Repository for OpenVINO's extra modules ☆128 · Updated this week
- Train, Evaluate, Optimize, Deploy Computer Vision Models via OpenVINO™ ☆1,192 · Updated last week
- Neural Network Compression Framework for enhanced OpenVINO™ inference (quantization sketch after this list) ☆1,046 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools (usage sketch after this list) ☆473 · Updated this week
- Convert tf.keras/Keras models to ONNX (conversion sketch after this list) ☆379 · Updated 3 years ago
- Common utilities for ONNX converters ☆272 · Updated 6 months ago
- TensorFlow backend for ONNX (export sketch after this list) ☆1,310 · Updated last year
- Dockerfiles and scripts for ONNX container images ☆137 · Updated 2 years ago
- Triton Python, C++ and Java client libraries, and gRPC-generated client examples for Go, Java and Scala (client sketch after this list) ☆627 · Updated last week
- Triton Model Analyzer is a CLI tool for understanding the compute and memory requirements of the Triton Inference Serv… ☆477 · Updated last week
- ONNX Optimizer (pass sketch after this list) ☆721 · Updated last week
- Common source, scripts and utilities for creating Triton backends. ☆327 · Updated this week
- ONNXMLTools enables conversion of models to ONNX ☆1,088 · Updated last week
- Tools for easier OpenVINO development/debugging ☆9 · Updated 3 months ago
- The framework to generate a Dockerfile, then build, test, and deploy a Docker image with the OpenVINO™ toolkit. ☆66 · Updated 2 weeks ago
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆135 · Updated 2 weeks ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models, with a focus on NVIDIA GPUs. ☆204 · Updated last month
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime (registration sketch after this list) ☆393 · Updated this week
- Examples for using ONNX Runtime for model training. ☆338 · Updated 7 months ago
- OpenVINO Tokenizers extension ☆36 · Updated this week
- OpenVINO™ integration with TensorFlow ☆179 · Updated 11 months ago
- Deep Learning Inference benchmark. Supports OpenVINO™ toolkit, TensorFlow, TensorFlow Lite, ONNX Runtime, OpenCV DNN, MXNet, PyTorch, Apa… ☆32 · Updated this week
- Arm NN ML Software. The code here is a read-only mirror of https://review.mlplatform.org/admin/repos/ml/armnn ☆1,268 · Updated this week
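For the "Run Generative AI models" entry (OpenVINO GenAI), a minimal sketch of its Python API, assuming a model already exported to OpenVINO IR in a local directory (the path is a placeholder):

```python
# Hedged sketch of the OpenVINO GenAI LLM pipeline.
import openvino_genai

pipe = openvino_genai.LLMPipeline("./llm_model_dir", "CPU")  # assumed model path
print(pipe.generate("What is OpenVINO?", max_new_tokens=50))
```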
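For NNCF, a rough post-training quantization sketch; the IR path and the random calibration samples are stand-ins for a real model and a representative dataset:

```python
# Hedged sketch: INT8 post-training quantization of an OpenVINO model with NNCF.
import numpy as np
import nncf
import openvino as ov

model = ov.Core().read_model("model.xml")  # assumed FP32 IR file
# random samples standing in for a representative calibration dataset
samples = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(10)]
quantized = nncf.quantize(model, nncf.Dataset(samples))
ov.save_model(quantized, "model_int8.xml")
```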
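For 🤗 Optimum Intel, a usage sketch that exports a Hugging Face checkpoint to OpenVINO on the fly; the model id is an arbitrary public checkpoint chosen for illustration:

```python
# Hedged sketch: run a transformers model through OpenVINO via Optimum Intel.
from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(clf("OpenVINO makes this fast"))
```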
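For keras2onnx, a conversion sketch; note the project is archived and targets older TensorFlow/Keras releases, so treat this as historical usage:

```python
# Hedged sketch: convert a Keras model to ONNX with keras2onnx.
import keras2onnx
from tensorflow import keras

model = keras.applications.MobileNetV2()  # any Keras model works here
onnx_model = keras2onnx.convert_keras(model, model.name)
keras2onnx.save_model(onnx_model, "mobilenetv2.onnx")
```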
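For the TensorFlow backend for ONNX (onnx-tf), an export sketch; "model.onnx" is a placeholder path:

```python
# Hedged sketch: load an ONNX model and export a TensorFlow SavedModel.
import onnx
from onnx_tf.backend import prepare

onnx_model = onnx.load("model.onnx")    # placeholder path
tf_rep = prepare(onnx_model)            # build a TensorFlow representation
tf_rep.export_graph("saved_model_dir")  # writes a TensorFlow SavedModel
```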
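For the Triton client libraries, an HTTP inference sketch; the server URL, model name, and tensor names/shapes are assumptions that depend on the deployed model configuration:

```python
# Hedged sketch: one inference request with the Triton Python HTTP client.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
data = np.random.rand(1, 3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("input__0", list(data.shape), "FP32")  # assumed name
inp.set_data_from_numpy(data)
result = client.infer("my_model", inputs=[inp])  # assumed model name
print(result.as_numpy("output__0").shape)        # assumed output name
```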
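For ONNX Optimizer, a pass sketch; the file path and the chosen passes are illustrative:

```python
# Hedged sketch: apply common graph-cleanup passes with onnxoptimizer.
import onnx
import onnxoptimizer

model = onnx.load("model.onnx")  # placeholder path
passes = ["eliminate_identity", "fuse_bn_into_conv"]
optimized = onnxoptimizer.optimize(model, passes)
onnx.save(optimized, "model_opt.onnx")
```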
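For onnxruntime-extensions, a registration sketch showing how the custom-op library is attached to an ONNX Runtime session; the model path is a placeholder:

```python
# Hedged sketch: register onnxruntime-extensions custom ops with ONNX Runtime.
import onnxruntime as ort
from onnxruntime_extensions import get_library_path

so = ort.SessionOptions()
so.register_custom_ops_library(get_library_path())
sess = ort.InferenceSession("model_with_custom_ops.onnx", so)  # placeholder
```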