openvinotoolkit / workbench
☆28 · Updated last year
Alternatives and similar repositories for workbench:
Users interested in workbench are comparing it to the libraries listed below.
- OpenVINO™ Explainable AI (XAI) Toolkit: Visual Explanation for OpenVINO Models ☆29 · Updated 3 months ago
- Software Development Kit (SDK) for the Intel® Geti™ platform for Computer Vision AI model training ☆74 · Updated this week
- Repository for OpenVINO's extra modules ☆112 · Updated 2 weeks ago
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆200 · Updated this week
- A scalable inference server for models optimized with OpenVINO™ ☆701 · Updated this week
- With OpenVINO Test Drive, users can run large language models (LLMs) and models trained by Intel Geti on their devices, including AI PCs … ☆16 · Updated this week
- The framework to generate a Dockerfile, then build, test, and deploy a Docker image with the OpenVINO™ toolkit ☆63 · Updated last month
- A curated list of OpenVINO-based AI projects ☆117 · Updated last month
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆968 · Updated this week
- Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala ☆590 · Updated this week
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆692 · Updated this week
- This repository is the home of the Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework. Pipeline Framework is a streaming med… ☆541 · Updated last week
- OpenVINO Tokenizers extension ☆28 · Updated this week
- Dataset Management Framework, a Python library and CLI tool to build, analyze, and manage Computer Vision datasets ☆563 · Updated this week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python ☆312 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆436 · Updated this week
- Triton Model Analyzer is a CLI tool to help better understand the compute and memory requirements of the Triton Inference Serv… ☆448 · Updated 2 weeks ago
- Actively maintained ONNX Optimizer ☆662 · Updated this week
- Common utilities for ONNX converters ☆257 · Updated last month
- OpenVINO NPU Plugin ☆45 · Updated this week
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime ☆352 · Updated this week
- Model Analyzer is a network statistics information tool ☆12 · Updated 3 months ago
- Deep Learning Inference benchmark. Supports the OpenVINO™ toolkit, TensorFlow, TensorFlow Lite, ONNX Runtime, OpenCV DNN, MXNet, PyTorch, Apa… ☆27 · Updated last month
- OpenVINO™ integration with TensorFlow ☆179 · Updated 6 months ago
- This script converts ONNX/OpenVINO IR models to TensorFlow's saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX and… ☆341 · Updated 2 years ago
- Pre-built components and code samples to help you build and deploy production-grade AI applications with the OpenVINO™ Toolkit from Intel ☆124 · Updated this week
- Intel® AI for Enterprise RAG converts enterprise data into actionable insights with excellent TCO. Utilizing Intel Gaudi AI accelerators … ☆12 · Updated last week
- Home of Intel® Deep Learning Streamer Pipeline Server (formerly Video Analytics Serving) ☆126 · Updated last year