openvinotoolkit / workbench
☆28 · Updated 2 years ago
Alternatives and similar repositories for workbench
Users interested in workbench are comparing it to the libraries listed below.
- A scalable inference server for models optimized with OpenVINO™ ☆775 · Updated this week
- OpenVINO™ Explainable AI (XAI) Toolkit: Visual Explanation for OpenVINO Models ☆33 · Updated last month
- Repository for OpenVINO's extra modules ☆144 · Updated 2 weeks ago
- Neural Network Compression Framework for enhanced OpenVINO™ inference (see the quantization sketch after this list) ☆1,091 · Updated last week
- Dataset Management Framework, a Python library and a CLI tool to build, analyze, and manage Computer Vision datasets. ☆647 · Updated this week
- OpenVINO Tokenizers extension ☆42 · Updated this week
- DL Streamer is now part of Open Edge Platform; for the latest updates and releases, please visit the new repo: https://github.com/open-edge-platfo… ☆564 · Updated 3 months ago
- Run Generative AI models with a simple C++/Python API using OpenVINO Runtime (see the LLM pipeline sketch after this list) ☆364 · Updated last week
- A framework to generate a Dockerfile, then build, test, and deploy a Docker image with the OpenVINO™ toolkit. ☆67 · Updated last month
- OpenVINO™ integration with TensorFlow ☆178 · Updated last year
- Software Development Kit (SDK) for the Intel® Geti™ platform for Computer Vision AI model training. ☆122 · Updated this week
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆719 · Updated last month
- With OpenVINO Test Drive, users can run large language models (LLMs) and models trained by Intel Geti on their devices, including AI PCs … ☆34 · Updated last month
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆418 · Updated last week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools (see the export sketch after this list) ☆502 · Updated this week
- Sample apps to demonstrate how to deploy models trained with TAO on DeepStream ☆433 · Updated last month
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,517 · Updated this week
- A Python package that extends the official PyTorch to easily obtain extra performance on Intel platforms ☆1,983 · Updated this week
- ONNX Optimizer ☆768 · Updated this week
- 📚 Jupyter notebook tutorials for OpenVINO™ ☆2,924 · Updated last week
- ONNX Runtime: cross-platform, high-performance scoring engine for ML models ☆72 · Updated this week
- Examples for using ONNX Runtime for machine learning inferencing. ☆1,516 · Updated last week
- Model Analyzer is the Network Statistics Information tool ☆13 · Updated 7 months ago
- Examples for using ONNX Runtime for model training. ☆351 · Updated last year
- Pre-built components and code samples to help you build and deploy production-grade AI applications with the OpenVINO™ Toolkit from Intel ☆181 · Updated 2 weeks ago
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. ☆823 · Updated 2 months ago
- Sample videos for running inference ☆312 · Updated last year
- Deep Learning Inference benchmark. Supports OpenVINO™ toolkit, TensorFlow, TensorFlow Lite, ONNX Runtime, OpenCV DNN, MXNet, PyTorch, Apa… ☆33 · Updated 2 months ago
- DeepStream SDK Python bindings and sample applications ☆1,708 · Updated 2 weeks ago
- Samples for TensorRT/Deepstream for Tesla & Jetson ☆1,252 · Updated 2 weeks ago
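
For the NNCF entry above, a minimal post-training quantization sketch using its documented `nncf.quantize` / `nncf.Dataset` API; the model path, input shape, and random calibration data are placeholder assumptions, not values from any of the listed repos.

```python
import numpy as np
import nncf
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder path to an OpenVINO IR model

# Toy calibration set of random tensors; replace with real samples
# shaped like the model's input (the shape below is an assumption).
calibration_data = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(10)]
calibration_dataset = nncf.Dataset(calibration_data)

# 8-bit post-training quantization with default settings.
quantized_model = nncf.quantize(model, calibration_dataset)
ov.save_model(quantized_model, "model_int8.xml")
```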
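
For the OpenVINO GenAI entry, a minimal sketch of its Python `LLMPipeline`, assuming a local folder that already contains a model and tokenizer exported to OpenVINO IR (the directory name below is a placeholder).

```python
import openvino_genai as ov_genai

# "TinyLlama-1.1B-Chat-ov" is a placeholder directory holding an LLM already
# exported to OpenVINO IR (e.g. via optimum-cli export openvino).
pipe = ov_genai.LLMPipeline("TinyLlama-1.1B-Chat-ov", "CPU")
print(pipe.generate("What does OpenVINO do?", max_new_tokens=100))
```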
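
For the Optimum Intel entry, a hedged sketch of on-the-fly export to OpenVINO via `OVModelForCausalLM`; the Hugging Face model id is only an example and any sufficiently small causal LM should work the same way.

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # example model id, swap in your own
tokenizer = AutoTokenizer.from_pretrained(model_id)
# export=True converts the PyTorch checkpoint to OpenVINO IR at load time.
model = OVModelForCausalLM.from_pretrained(model_id, export=True)

inputs = tokenizer("What does OpenVINO do?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```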