A scalable inference server for models optimized with OpenVINO™
☆839 · Updated Mar 16, 2026
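OpenVINO Model Server exposes a TensorFlow-Serving-compatible REST API for prediction requests. A minimal sketch of how such a request is assembled (the host `localhost:8000` and model name `resnet` are illustrative assumptions; this only builds the request and does not assume a running server):

```python
import json

def build_predict_request(host: str, model_name: str, instances: list):
    """Build the (url, json_body) pair for a model-server REST predict call.

    The endpoint shape /v1/models/<name>:predict follows the
    TensorFlow Serving REST convention that OVMS implements.
    """
    url = f"http://{host}/v1/models/{model_name}:predict"
    body = json.dumps({"instances": instances})
    return url, body

# Hypothetical model name and port, for illustration only.
url, body = build_predict_request("localhost:8000", "resnet", [[0.0, 0.5, 1.0]])
print(url)  # http://localhost:8000/v1/models/resnet:predict
```

The resulting URL and JSON body can be sent with any HTTP client; the server replies with a JSON document containing the model outputs.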
Alternatives and similar repositories for model_server
Users interested in model_server are comparing it to the repositories listed below.
- Inference Model Manager for Kubernetes ☆46 · Updated Apr 10, 2019
- OpenVINO operator for OpenShift and Kubernetes ☆20 · Updated Jan 9, 2026
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆730 · Updated Feb 11, 2026
- OpenVINO™ is an open source toolkit for optimizing and deploying AI inference ☆9,880 · Updated this week
- Pre-trained Deep Learning models and demos (high quality and extremely fast) ☆4,368 · Updated Feb 20, 2026
- A multi-user, distributed computing environment for running DL model training experiments on Intel® Xeon® Scalable processor-based system… ☆392 · Updated May 10, 2024
- Train, Evaluate, Optimize, Deploy Computer Vision Models via OpenVINO™ ☆1,218 · Updated this week
- Deep Learning Streamer (DL Streamer) Pipeline Framework is an open-source streaming media analytics framework, based on GStreamer* multim… ☆588 · Updated this week
- OpenVINO Tokenizers extension ☆49 · Updated Mar 12, 2026
- Run Generative AI models with simple C++/Python API and using OpenVINO Runtime ☆465 · Updated this week
- YoloV3/tiny-YoloV3+RaspberryPi3/Ubuntu LaptopPC+NCS/NCS2+USB Camera+Python+OpenVINO ☆539 · Updated Feb 27, 2022
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,426 · Updated Mar 13, 2026
- Learn about the workflow using Intel® Distribution of OpenVINO™ toolkit to accelerate vision, automatic speech recognition, natural langu… ☆302 · Updated Jul 22, 2024
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,137 · Updated this week
- A curated list of OpenVINO based AI projects ☆186 · Updated Jun 30, 2025
- OpenVINO backend for Triton. ☆37 · Updated Mar 10, 2026
- 📚 Jupyter notebook tutorials for OpenVINO™ ☆3,055 · Updated Mar 13, 2026
- OpenVINO environment with Docker ☆69 · Updated Oct 21, 2020
- ☆234 · Updated Dec 8, 2022
- The framework to generate a Dockerfile, build, test, and deploy a docker image with OpenVINO™ toolkit. ☆71 · Updated Feb 25, 2026
- Experiments API for Experiment Tracking on Kubernetes ☆27 · Updated Jan 3, 2023
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆549 · Updated this week
- With OpenVINO Test Drive, users can run large language models (LLMs) and models trained by Intel Geti on their devices, including AI PCs … ☆37 · Updated Mar 12, 2026
- Python wrapper class for OpenVINO Model Server. Users can submit inference requests to OVMS with just a few lines of code. ☆10 · Updated Jan 16, 2022
- Bridge to connect nGraph with TensorFlow ☆52 · Updated Jan 3, 2023
- Describes how to run DBFace, a real-time, single-shot face detection model on Intel OpenVINO ☆29 · Updated Aug 23, 2020
- OpenVINO™ Security Add-on to control access to inferencing models. ☆18 · Updated Oct 28, 2024
- The smart city reference pipeline shows how to integrate various media building blocks, with analytics powered by the OpenVINO™ Toolkit, … ☆215 · Updated May 5, 2025
- nGraph has moved to OpenVINO ☆1,341 · Updated Oct 15, 2020
- A Python package that extends the official PyTorch to deliver extra performance on Intel platforms ☆2,014 · Updated Mar 13, 2026
- ☆56 · Updated Sep 29, 2020
- Repository for OpenVINO's extra modules ☆170 · Updated Feb 26, 2026
- oneAPI Deep Neural Network Library (oneDNN) ☆3,964 · Updated this week
- Run Computer Vision AI models with simple Python API and using OpenVINO Runtime ☆62 · Updated Mar 8, 2026
- Standardized Distributed Generative and Predictive AI Inference Platform for Scalable, Multi-Framework Deployment on Kubernetes ☆5,216 · Updated this week
- Deep Learning Inference benchmark. Supports OpenVINO™ toolkit, TensorFlow, TensorFlow Lite, ONNX Runtime, OpenCV DNN, MXNet, PyTorch, Apa… ☆35 · Updated this week
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … ☆2,597 · Updated Mar 13, 2026
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆12,800 · Updated Mar 9, 2026
- Multi Model Server is a tool for serving neural net models for inference ☆1,025 · Updated May 20, 2024