openvinotoolkit / openvino_testdrive
With OpenVINO Test Drive, users can run large language models (LLMs) and models trained with Intel Geti on their own hardware, including AI PCs and edge devices.
☆29 · Updated last week
Alternatives and similar repositories for openvino_testdrive
Users interested in openvino_testdrive are comparing it to the libraries listed below.
- Run generative AI models with a simple C++/Python API using OpenVINO Runtime ☆316 · Updated this week
- A curated list of OpenVINO-based AI projects ☆146 · Updated last month
- OpenVINO Tokenizers extension ☆38 · Updated last week
- Repository for OpenVINO's extra modules ☆134 · Updated 2 weeks ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆481 · Updated this week
- A framework to generate a Dockerfile and build, test, and deploy a Docker image with the OpenVINO™ toolkit ☆67 · Updated last month
- OpenVINO™ Explainable AI (XAI) Toolkit: visual explanations for OpenVINO models ☆32 · Updated 4 months ago
- A toolkit to help optimize ONNX models ☆189 · Updated this week
- Build computer vision models in a fraction of the time and with less data ☆356 · Updated this week
- A scalable inference server for models optimized with OpenVINO™ ☆746 · Updated this week
- Pre-built components and code samples to help you build and deploy production-grade AI applications with the OpenVINO™ toolkit from Intel ☆163 · Updated this week
- oneAPI Deep Neural Network Library (oneDNN) ☆20 · Updated last week
- Software Development Kit (SDK) for the Intel® Geti™ platform for computer vision AI model training ☆117 · Updated this week
- Tools for easier OpenVINO development and debugging ☆9 · Updated 3 weeks ago
- ONNX Runtime: cross-platform, high-performance scoring engine for ML models ☆69 · Updated this week
- DL Streamer is now part of Open Edge Platform; for the latest updates and releases, visit the new repo: https://github.com/open-edge-platfo… ☆564 · Updated 3 weeks ago
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,070 · Updated this week
- Use safetensors with ONNX 🤗 ☆69 · Updated last month
- C++ pipeline with the OpenVINO native API for Stable Diffusion v1.5 ☆13 · Updated last year
- Intel® NPU Acceleration Library ☆682 · Updated 3 months ago
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated 11 months ago
- Passively collect images for computer vision datasets on the edge ☆35 · Updated last year
- ☆84 · Updated 2 years ago
- No-code CLI designed for accelerating ONNX workflows ☆207 · Updated last month
- ☆25 · Updated 4 months ago
- LiteRT continues the legacy of TensorFlow Lite as the trusted, high-performance runtime for on-device AI. Now with LiteRT Next, we're exp… ☆688 · Updated this week
- A toolkit to help optimize large ONNX models ☆157 · Updated last year
- Awesome AI models with NCNN, and how they were converted ✨✨✨ ☆274 · Updated 2 years ago
- A project demonstrating how to make DeepStream Docker images ☆80 · Updated 8 months ago
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime ☆405 · Updated this week