intel / edge-insights-vision
Edge Insights for Vision (eiv) is a package that automatically installs Intel® GPU drivers and sets up the environment for inference application development with the OpenVINO™ toolkit.
Related projects:
- Run Generative AI models using the native OpenVINO™ C++ API
- Pre-built components and code samples to help you build and deploy production-grade AI applications with the OpenVINO™ toolkit from Intel
- A curated list of OpenVINO-based AI projects
- 🤗 Optimum Intel: accelerate inference with Intel optimization tools
- A reference application for a local AI assistant with LLM and RAG
- Intel® NPU Acceleration Library
- OpenVINO Tokenizers extension
- Generative AI extensions for onnxruntime
- Large Language Model Text Generation Inference on Habana Gaudi
- NVIDIA DLA-SW: recipes and tools for running deep learning inference workloads on NVIDIA DLA cores
- The Hailo Model Zoo: pre-trained models and a full building and evaluation environment
- Repository for OpenVINO's extra modules
- OpenVINO NPU Plugin
- High-performance, optimized, pre-trained template AI application pipelines for systems using Hailo devices
- OpenVINO™ integration with TensorFlow
- An innovative library for efficient LLM inference via low-bit quantization
- The jetson-examples repository by Seeed Studio offers seamless, one-line-command deployment to run vision AI and Generative AI models o…
- Boot an NVIDIA Jetson Nano Developer Kit from a mass-storage USB device (Jetson Nano A02, B01, 2GB, and possibly Jetson TX1)
- A Python package that extends official PyTorch to deliver additional performance on Intel platforms
- TensorRT Model Optimizer: a unified library of state-of-the-art model optimization techniques such as quantization, sparsity, and distillation
- Dockerfiles, scripts, YAML files, Helm charts, etc. used to scale out AI containers with versions of TensorFlow …
- A scalable inference server for models optimized with OpenVINO™
- GenAI components at the micro-service level; a GenAI service composer to create mega-services
- Generative AI Examples: a collection of GenAI examples such as ChatQnA and Copilot, illustrating the pipeline capabilities of the Open…
- A framework to generate a Dockerfile, then build, test, and deploy a Docker image with the OpenVINO™ toolkit