openvinotoolkit / awesome-openvino
A curated list of OpenVINO-based AI projects
☆117 · Updated last month
Alternatives and similar repositories for awesome-openvino:
Users interested in awesome-openvino are comparing it to the libraries listed below.
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime (a minimal Python usage sketch appears after this list) ☆200 · Updated this week
- OpenVINO Tokenizers extension ☆28 · Updated this week
- Software Development Kit (SDK) for the Intel® Geti™ platform for Computer Vision AI model training. ☆74 · Updated this week
- Repository for OpenVINO's extra modules ☆112 · Updated 2 weeks ago
- Pre-built components and code samples to help you build and deploy production-grade AI applications with the OpenVINO™ Toolkit from Intel ☆124 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools (an export-and-generate sketch appears after this list) ☆436 · Updated this week
- An Awesome list of oneAPI projects ☆134 · Updated last month
- GenAI components at micro-service level; GenAI service composer to create mega-service ☆92 · Updated this week
- The no-code AI toolchain ☆82 · Updated this week
- This repository contains Dockerfiles, scripts, yaml files, Helm charts, etc. used to scale out AI containers with versions of TensorFlow … ☆35 · Updated this week
- A scalable inference server for models optimized with OpenVINO™ (a REST request sketch appears after this list) ☆701 · Updated this week
- The framework to generate a Dockerfile, build, test, and deploy a docker image with OpenVINO™ toolkit. ☆63 · Updated last month
- High-performance, optimized pre-trained template AI application pipelines for systems using Hailo devices ☆109 · Updated 3 weeks ago
- OpenVINO NPU Plugin ☆45 · Updated this week
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆166 · Updated this week
- With OpenVINO Test Drive, users can run large language models (LLMs) and models trained by Intel Geti on their devices, including AI PCs … ☆16 · Updated this week
- OpenVINO™ integration with TensorFlow ☆179 · Updated 6 months ago
- An innovative library for efficient LLM inference via low-bit quantization ☆352 · Updated 4 months ago
- Explainable AI Tooling (XAI). XAI is used to discover and explain a model's prediction in a way that is interpretable to the user. Releva… ☆36 · Updated last month
- Evaluation, benchmark, and scorecard, targeting for performance on throughput and latency, accuracy on popular evaluation harness, safety… ☆25 · Updated this week
- TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillati… ☆679 · Updated 3 weeks ago
- Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench ☆139 · Updated 2 weeks ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ☆59 · Updated last month
- OpenVINO™ Explainable AI (XAI) Toolkit: Visual Explanation for OpenVINO Models ☆29 · Updated 3 months ago
- Home of Intel® Deep Learning Streamer Pipeline Server (formerly Video Analytics Serving) ☆126 · Updated last year
- A utility library to help integrate Python applications with Metropolis Microservices for Jetson ☆12 · Updated last month
- LiteRT is the new name for TensorFlow Lite (TFLite). While the name is new, it's still the same trusted, high-performance runtime for on-… ☆253 · Updated this week
- Generative AI Examples is a collection of GenAI examples such as ChatQnA, Copilot, which illustrate the pipeline capabilities of the Open… ☆339 · Updated this week
- Generative AI extensions for onnxruntime ☆591 · Updated this week
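For the "Run Generative AI models…" entry above (OpenVINO GenAI), a minimal usage sketch, assuming `openvino-genai` is installed and an LLM has already been exported to OpenVINO IR; the model directory path and prompt are placeholders:

```python
# Sketch: text generation with the OpenVINO GenAI LLMPipeline (Python API).
# Assumes `pip install openvino-genai` and a local directory that already
# contains an OpenVINO IR export of an LLM plus its tokenizer files
# (the path below is hypothetical).
import openvino_genai

pipe = openvino_genai.LLMPipeline("./TinyLlama-1.1B-Chat-ov", "CPU")  # "GPU"/"NPU" also possible
print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
```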
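For the 🤗 Optimum Intel entry, a hedged sketch of exporting a Hugging Face causal LM to OpenVINO on the fly and running generation; the model ID is only an example:

```python
# Sketch: load a Transformers model through Optimum Intel's OpenVINO backend.
# Assumes `pip install optimum[openvino]`; the model ID is just an example.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "gpt2"
model = OVModelForCausalLM.from_pretrained(model_id, export=True)  # export=True converts to OpenVINO IR
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("OpenVINO accelerates", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```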
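For the scalable inference server entry (OpenVINO Model Server), a sketch of a KServe-style REST inference request sent with plain `requests`; the port, model name, input tensor name, and shape are assumptions that must match the actual deployment:

```python
# Sketch: query an OpenVINO Model Server instance over its KServe v2 REST API.
# Endpoint, model name, input name, and shape are assumptions; they must match
# how the server was started and the served model's metadata.
import numpy as np
import requests

url = "http://localhost:8000/v2/models/resnet/infer"
dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder image batch

payload = {
    "inputs": [{
        "name": "input",              # must match the model's input tensor name
        "shape": list(dummy.shape),
        "datatype": "FP32",
        "data": dummy.flatten().tolist(),
    }]
}

response = requests.post(url, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["outputs"][0]["shape"])
```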