openvinotoolkit / workbench
☆28 · Updated last year
Alternatives and similar repositories for workbench:
Users interested in workbench are comparing it to the libraries listed below.
- OpenVINO™ Explainable AI (XAI) Toolkit: Visual Explanation for OpenVINO Models ☆32 · Updated 5 months ago
- A scalable inference server for models optimized with OpenVINO™ ☆708 · Updated this week
- Repository for OpenVINO's extra modules ☆115 · Updated 3 weeks ago
- Software Development Kit (SDK) for the Intel® Geti™ platform for Computer Vision AI model training ☆77 · Updated this week
- Dataset Management Framework, a Python library and a CLI tool to build, analyze, and manage Computer Vision datasets ☆575 · Updated this week
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆978 · Updated this week
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆224 · Updated this week
- Train, Evaluate, Optimize, Deploy Computer Vision Models via OpenVINO™ ☆1,160 · Updated this week
- This repository is home to the Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework. Pipeline Framework is a streaming med… ☆543 · Updated last week
- The framework to generate a Dockerfile, then build, test, and deploy a Docker image with the OpenVINO™ toolkit ☆65 · Updated 3 weeks ago
- Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala ☆603 · Updated 3 weeks ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆444 · Updated this week
- With OpenVINO Test Drive, users can run large language models (LLMs) and models trained by Intel Geti on their devices, including AI PCs … ☆19 · Updated this week
- OpenVINO Tokenizers extension ☆30 · Updated this week
- A curated list of OpenVINO-based AI projects ☆122 · Updated 2 months ago
- ☆30 · Updated this week
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆696 · Updated this week
- Triton Model Analyzer is a CLI tool that helps you better understand the compute and memory requirements of the Triton Inference Serv… ☆460 · Updated 2 weeks ago
- Home of the Intel® Deep Learning Streamer Pipeline Server (formerly Video Analytics Serving) ☆126 · Updated last year
- Sample videos for running inference ☆286 · Updated 7 months ago
- oneAPI Deep Neural Network Library (oneDNN) ☆19 · Updated this week
- Sample apps demonstrating how to deploy models trained with TAO on DeepStream ☆397 · Updated this week
- Practice git, Travis CI, and Intel OpenVINO ☆14 · Updated 3 years ago
- This repository contains tutorials and examples for Triton Inference Server ☆656 · Updated this week
- ☆20 · Updated 8 months ago
- Common source, scripts, and utilities for creating Triton backends ☆310 · Updated 3 weeks ago
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments ☆775 · Updated 2 weeks ago
- Intel® Extension for TensorFlow* ☆332 · Updated last month
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime ☆361 · Updated this week
- TensorFlow Backend for ONNX ☆1,295 · Updated 11 months ago