openvinotoolkit / openvino_testdrive
With OpenVINO Test Drive, users can run large language models (LLMs) and models trained by Intel Geti on their devices, including AI PCs and Edge devices.
☆16 · Updated this week
Alternatives and similar repositories for openvino_testdrive:
Users interested in openvino_testdrive are comparing it to the libraries listed below.
- Run Generative AI models with a simple C++/Python API using OpenVINO Runtime (a minimal Python sketch follows this list) ☆200 · Updated this week
- Software Development Kit (SDK) for the Intel® Geti™ platform for Computer Vision AI model training ☆74 · Updated this week
- Repository for OpenVINO's extra modules ☆112 · Updated 2 weeks ago
- OpenVINO™ Explainable AI (XAI) Toolkit: Visual Explanation for OpenVINO Models ☆29 · Updated 3 months ago
- A curated list of OpenVINO-based AI projects ☆117 · Updated last month
- Pre-built components and code samples to help you build and deploy production-grade AI applications with the OpenVINO™ Toolkit from Intel ☆124 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools (see the Optimum Intel sketch after the list) ☆436 · Updated this week
- A scalable inference server for models optimized with OpenVINO™ (see the REST request sketch after the list) ☆701 · Updated this week
- OpenVINO Tokenizers extension ☆28 · Updated this week
- This repository is home to the Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework. Pipeline Framework is a streaming med… ☆541 · Updated last week
- Home of Intel® Deep Learning Streamer Pipeline Server (formerly Video Analytics Serving) ☆126 · Updated last year
- A framework to generate a Dockerfile, then build, test, and deploy a Docker image with the OpenVINO™ toolkit ☆63 · Updated last month
- OpenVINO NPU Plugin ☆45 · Updated this week
- An example of deploying YOLO models on Triton Server for performance and testing purposes ☆52 · Updated 8 months ago
- Edge Insights for Vision (EIV) is a package that helps auto-install Intel® GPU drivers and set up the environment for inference application… ☆18 · Updated 4 months ago
- Intel® NPU Acceleration Library ☆593 · Updated 2 weeks ago
- C++ application to perform computer vision tasks using NVIDIA Triton Server for model inference ☆22 · Updated 2 weeks ago
- Neural Network Compression Framework for enhanced OpenVINO™ inference (see the quantization sketch after the list) ☆968 · Updated this week
- High-performance, optimized pre-trained template AI application pipelines for systems using Hailo devices ☆109 · Updated 3 weeks ago
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆352 · Updated this week
- A project demonstrating how to use nvmetamux to run multiple models in parallel ☆98 · Updated 3 months ago
- YOLOv5 on Orin DLA ☆189 · Updated 11 months ago
- Deep Learning Inference benchmark. Supports OpenVINO™ toolkit, TensorFlow, TensorFlow Lite, ONNX Runtime, OpenCV DNN, MXNet, PyTorch, Apa… ☆27 · Updated last month
- OpenVINO™ integration with TensorFlow ☆179 · Updated 6 months ago
- "FastSAM_Awsome_Openvino" 项目展示了如何通过 OpenVINO 框架高效部署 FastSAM 模型 ,实现了令人瞩目的实例分割功能。该项目提供了 C++ 版本和 Python 版本两种实现,为开发者提供了在不同语言环境下使用 FastSAM 模型的选…☆35Updated last year
- A YOLOv11 project implemented in C++ and optimized using NVIDIA TensorRT ☆54 · Updated 3 months ago
- Provides an ensemble model to deploy a YOLOv8 ONNX model to Triton ☆33 · Updated last year
- This script converts an ONNX/OpenVINO IR model to TensorFlow's saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX and… ☆341 · Updated 2 years ago
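A few of the entries above describe concrete APIs; the short Python sketches below illustrate them. First, the Generative AI API (openvino.genai): a minimal sketch, assuming a model already exported to OpenVINO IR in a local directory (the path and prompt are placeholders):

```python
import openvino_genai

# Load an LLM that has already been converted to OpenVINO IR
# ("./TinyLlama-ov" is a placeholder path) and run it on the CPU.
pipe = openvino_genai.LLMPipeline("./TinyLlama-ov", "CPU")

# Generate a completion; max_new_tokens bounds the response length.
print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
```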
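For 🤗 Optimum Intel, a sketch of exporting a Hugging Face checkpoint to OpenVINO and generating with it; this assumes `optimum[openvino]` is installed, and the model id is only an example:

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "gpt2"  # example checkpoint; any causal LM should work
# export=True converts the PyTorch weights to OpenVINO IR on the fly.
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("OpenVINO is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```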
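The scalable inference server (OpenVINO™ Model Server) can be queried over its KServe-compatible REST API; a hedged sketch, assuming a model named `my_model` is served on localhost:8000 and takes a 1×4 FP32 input (all placeholders):

```python
import requests

# KServe v2 inference request; the tensor name, shape, and data must
# match the deployed model, so everything below is illustrative only.
payload = {
    "inputs": [
        {
            "name": "input",  # placeholder input tensor name
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [0.1, 0.2, 0.3, 0.4],
        }
    ]
}

resp = requests.post("http://localhost:8000/v2/models/my_model/infer", json=payload)
print(resp.json())
```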
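Finally, for the Neural Network Compression Framework (NNCF), a sketch of post-training INT8 quantization of an OpenVINO IR model; the model path, input shape, and random calibration data are placeholders:

```python
import numpy as np
import nncf
import openvino as ov

model = ov.Core().read_model("model.xml")  # placeholder IR path

# Post-training quantization needs a small set of representative inputs;
# random tensors here only keep the sketch self-contained.
samples = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(100)]
calibration_dataset = nncf.Dataset(samples)

quantized_model = nncf.quantize(model, calibration_dataset)
ov.save_model(quantized_model, "model_int8.xml")
```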