openvinotoolkit / workbench
☆28 · Updated 2 years ago
Alternatives and similar repositories for workbench
Users interested in workbench are comparing it to the repositories listed below.
- A scalable inference server for models optimized with OpenVINO™ ☆816 · Updated this week
- Software Development Kit (SDK) for the Geti™ platform for Computer Vision AI model training ☆123 · Updated this week
- Train, Evaluate, Optimize, and Deploy Computer Vision Models via OpenVINO™ ☆1,211 · Updated this week
- OpenVINO™ Explainable AI (XAI) Toolkit: Visual Explanation for OpenVINO Models ☆36 · Updated 4 months ago
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,115 · Updated last week
- Dataset Management Framework, a Python library and a CLI tool to build, analyze, and manage Computer Vision datasets ☆655 · Updated this week
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆724 · Updated this week
- Repository for OpenVINO's extra modules ☆161 · Updated last week
- Deep Learning Streamer (DL Streamer) Pipeline Framework is an open-source streaming media analytics framework, based on GStreamer* multim… ☆569 · Updated this week
- OpenVINO™ integration with TensorFlow ☆178 · Updated last year
- Run Generative AI models with a simple C++/Python API using OpenVINO Runtime ☆414 · Updated this week
- The framework to generate a Dockerfile, then build, test, and deploy a Docker image with the OpenVINO™ toolkit ☆68 · Updated last week
- OpenVINO Tokenizers extension ☆46 · Updated this week
- Tools for easier OpenVINO development/debugging ☆10 · Updated 6 months ago
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … ☆2,570 · Updated this week
- DeepStream SDK Python bindings and sample applications ☆1,765 · Updated 3 months ago
- With OpenVINO Test Drive, users can run large language models (LLMs) and models trained by Intel Geti on their devices, including AI PCs … ☆35 · Updated last month
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆528 · Updated this week
- Reference implementations of MLPerf® inference benchmarks ☆1,513 · Updated this week
- Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala ☆672 · Updated last week
- Sample apps demonstrating how to deploy models trained with TAO on DeepStream ☆438 · Updated 2 months ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,920 · Updated last week
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime ☆434 · Updated last month
- Build computer vision models in a fraction of the time and with less data ☆437 · Updated this week
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,180 · Updated 2 months ago
- ONNX Optimizer ☆790 · Updated this week
- Deep Learning Inference benchmark. Supports OpenVINO™ toolkit, TensorFlow, TensorFlow Lite, ONNX Runtime, OpenCV DNN, MXNet, PyTorch, Apa… ☆34 · Updated last week
- Pre-trained Deep Learning models and demos (high quality and extremely fast) ☆4,344 · Updated last week
- Run Computer Vision AI models with a simple Python API using OpenVINO Runtime ☆59 · Updated last week
- ONNX Runtime: cross-platform, high-performance scoring engine for ML models ☆78 · Updated this week