openvinotoolkit / openvino_contrib
Repository for OpenVINO's extra modules
☆129 Updated this week
Alternatives and similar repositories for openvino_contrib
Users interested in openvino_contrib are comparing it to the libraries listed below.
- The framework to generate a Dockerfile, build, test, and deploy a docker image with OpenVINO™ toolkit.☆66 Updated 3 weeks ago
- ONNX Runtime: cross-platform, high-performance scoring engine for ML models☆65 Updated this week
- Run generative AI models with a simple C++/Python API using OpenVINO Runtime (see the minimal Python sketch after this list)☆295 Updated this week
- OpenVINO Tokenizers extension (see the tokenizer-conversion sketch after this list)☆36 Updated last week
- A curated list of OpenVINO-based AI projects☆138 Updated 2 weeks ago
- OpenVINO™ Explainable AI (XAI) Toolkit: Visual Explanation for OpenVINO Models☆32 Updated 3 months ago
- Deep Learning Inference benchmark. Supports OpenVINO™ toolkit, TensorFlow, TensorFlow Lite, ONNX Runtime, OpenCV DNN, MXNet, PyTorch, Apa…☆32 Updated last week
- Tools for easier OpenVINO development/debugging☆9 Updated 3 months ago
- This repository is a home to Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework. Pipeline Framework is a streaming med…☆562 Updated last month
- A scalable inference server for models optimized with OpenVINO™ (see the REST inference sketch after this list)☆739 Updated this week
- A project demonstrating how to make DeepStream docker images.☆78 Updated 6 months ago
- High-performance, optimized pre-trained template AI application pipelines for systems using Hailo devices☆143 Updated 3 months ago
- Common utilities for ONNX converters☆272 Updated 6 months ago
- C++ Helper Class for Deep Learning Inference Frameworks: TensorFlow Lite, TensorRT, OpenCV, OpenVINO, ncnn, MNN, SNPE, Arm NN, NNabla, ON…☆289 Updated 3 years ago
- With OpenVINO Test Drive, users can run large language models (LLMs) and models trained by Intel Geti on their devices, including AI PCs …☆26 Updated 2 weeks ago
- OpenVINO™ integration with TensorFlow☆179 Updated 11 months ago
- OpenVINO ARM64 Notes☆49 Updated 5 years ago
- How to deploy open source models using DeepStream and Triton Inference Server☆80 Updated last year
- A project demonstrating how to use nvmetamux to run multiple models in parallel.☆102 Updated 8 months ago
- OpenVINO backend for Triton.☆32 Updated last week
- Resize images with CUDA, Python, and CuPy☆41 Updated last year
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime☆394 Updated this week
- Create a concurrent video analysis pipeline featuring multistream face and human pose detection, vehicle attribute detection, and the abi…☆52 Updated 11 months ago
- An implementation of YOLOv4, YOLOv4-relu, YOLOv4-tiny, YOLOv4-tiny-3l, Scaled-YOLOv4 and INT8 quantization in OpenVINO 2021.3☆239 Updated 4 years ago
- C++ object detection inference from a video or image input source☆78 Updated this week
- DeepStream 4.x samples to deploy TLT-trained models☆85 Updated 5 years ago
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API.☆135 Updated 3 weeks ago
- An nvImageCodec library of GPU- and CPU-accelerated codecs featuring a unified interface☆109 Updated 3 months ago
- This script converts ONNX/OpenVINO IR models to TensorFlow's saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX and…☆342 Updated 2 years ago
- An example of using DeepStream SDK for redaction☆209 Updated last year
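
The OpenVINO GenAI entry above advertises a compact C++/Python API for running generative models on OpenVINO Runtime. Below is a minimal Python sketch of that flow, assuming `openvino-genai` is installed and that `./model_dir` (a placeholder path) already contains an LLM exported to OpenVINO IR, e.g. via optimum-intel:

```python
# Minimal sketch: run an LLM with the openvino_genai pipeline.
# "./model_dir" and the prompt are placeholders; the model must already be in OpenVINO IR format.
import openvino_genai as ov_genai

# Load the exported model and target the CPU device.
pipe = ov_genai.LLMPipeline("./model_dir", "CPU")

# Generate a completion; max_new_tokens bounds the response length.
print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
```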
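
The OpenVINO Tokenizers extension is typically used to convert a Hugging Face tokenizer into an OpenVINO model so tokenization runs as part of the OpenVINO graph. A rough sketch, assuming `transformers` and `openvino-tokenizers` are installed and using `bert-base-uncased` purely as an example checkpoint:

```python
# Rough sketch: convert a Hugging Face tokenizer to an OpenVINO model and run it.
import openvino as ov
from transformers import AutoTokenizer
from openvino_tokenizers import convert_tokenizer

hf_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # example checkpoint
ov_tokenizer = convert_tokenizer(hf_tokenizer)                     # returns an OpenVINO model
compiled = ov.compile_model(ov_tokenizer)

# The compiled tokenizer takes a batch of strings and returns token tensors
# (e.g. input_ids / attention_mask).
model_inputs = compiled(["OpenVINO Tokenizers keep tokenization inside the OpenVINO graph."])
print(model_inputs)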
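
The OpenVINO Model Server entry serves models over gRPC and REST. The sketch below sends a KServe-style REST inference request with the `requests` library, assuming a server is already running on `localhost:8000` with a model named `my_model` that takes a single FP32 input of shape [1, 3, 224, 224]; the host, port, model name, and input name are all placeholders:

```python
# Hedged sketch: query an OpenVINO Model Server instance over its KServe-compatible REST API.
import requests

payload = {
    "inputs": [
        {
            "name": "input",                      # placeholder input tensor name
            "shape": [1, 3, 224, 224],
            "datatype": "FP32",
            "data": [0.0] * (1 * 3 * 224 * 224),  # dummy data in row-major order
        }
    ]
}

resp = requests.post("http://localhost:8000/v2/models/my_model/infer", json=payload)
resp.raise_for_status()

# The response lists output tensors with their names, shapes, and data.
print(resp.json()["outputs"][0]["shape"])
```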