intel / edge-insights-vision
Edge Insights for Vision (eiv) is a package that automates installation of Intel® GPU drivers and sets up an environment for inference application development with the OpenVINO™ toolkit
☆18 · Updated 7 months ago
Alternatives and similar repositories for edge-insights-vision
Users interested in edge-insights-vision are comparing it to the libraries listed below.
- Run generative AI models with a simple C++/Python API on top of the OpenVINO Runtime (see the first sketch after this list) ☆274 · Updated this week
- OpenVINO Intel NPU Compiler ☆50 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools (see the second sketch after this list) ☆464 · Updated this week
- Tools for easier OpenVINO development/debugging ☆9 · Updated last month
- With OpenVINO Test Drive, users can run large language models (LLMs) and models trained by Intel Geti on their devices, including AI PCs … ☆24 · Updated last week
- A curated list of OpenVINO based AI projects ☆132 · Updated 4 months ago
- Intel® NPU Acceleration Library ☆671 · Updated 3 weeks ago
- OpenVINO™ integration with TensorFlow ☆179 · Updated 10 months ago
- ☆108 · Updated last month
- Repository for OpenVINO's extra modules ☆121 · Updated last week
- NVIDIA DLA-SW, the recipes and tools for running deep learning workloads on NVIDIA DLA cores for inference applications. ☆195 · Updated 11 months ago
- A scalable inference server for models optimized with OpenVINO™ (see the third sketch after this list) ☆723 · Updated this week
- Run generative AI models on Sophgo BM1684X/BM1688 ☆208 · Updated this week
- Accelerate LLMs with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using ipex-llm ☆164 · Updated 2 weeks ago
- ☆784 · Updated this week
- Pre-built components and code samples to help you build and deploy production-grade AI applications with the OpenVINO™ Toolkit from Intel ☆144 · Updated last week
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆714 · Updated this week
- llm-export can export LLM models to ONNX. ☆289 · Updated 3 months ago
- OpenVINO Tokenizers extension ☆33 · Updated this week
- This repository contains the results and code for the MLPerf™ Inference v2.0 benchmark. ☆9 · Updated last year
- ☆514 · Updated 2 weeks ago
- Common utilities for ONNX converters ☆268 · Updated 5 months ago
- ☆726 · Updated last year
- Neural Network Compression Framework for enhanced OpenVINO™ inference (see the fourth sketch after this list) ☆1,009 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆49 · Updated 6 months ago
- PaddlePaddle custom device implementation (PaddlePaddle custom hardware integration) ☆83 · Updated this week
- Intel® NPU (Neural Processing Unit) Driver ☆254 · Updated this week
- Intel® Tensor Processing Primitives extension for PyTorch* ☆17 · Updated this week
- ☆988 · Updated last year
- A tutorial for getting started with the Deep Learning Accelerator (DLA) on NVIDIA Jetson ☆332 · Updated 2 years ago
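
First sketch: the OpenVINO GenAI entry at the top of the list exposes LLM inference through a small Python API. A minimal sketch, assuming a chat model has already been converted to OpenVINO IR and saved to a local directory (the path and device name are placeholders):

```python
import openvino_genai

# Path to a model already converted to OpenVINO IR (hypothetical directory).
pipe = openvino_genai.LLMPipeline("./TinyLlama-1.1B-ov", "CPU")

# Generate a completion; max_new_tokens caps the response length.
print(pipe.generate("What does OpenVINO do?", max_new_tokens=100))
```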
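Second sketch: Optimum Intel provides OpenVINO-backed drop-in replacements for Hugging Face model classes. A sketch assuming the `optimum[openvino]` extra is installed; the model ID is a placeholder:

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # placeholder model ID

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly.
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("What does OpenVINO do?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```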
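Third sketch: the OpenVINO Model Server entry serves models over gRPC/REST, including the KServe v2 protocol. A hedged client-side sketch using plain `requests`; the server address, model name, input tensor name, and shape are all assumptions about a particular deployment:

```python
import numpy as np
import requests

# Assumed deployment details: adjust to the served model's actual metadata.
url = "http://localhost:8000/v2/models/resnet/infer"
payload = {
    "inputs": [{
        "name": "input",                # assumed input tensor name
        "shape": [1, 3, 224, 224],      # assumed input shape
        "datatype": "FP32",
        "data": np.random.rand(1, 3, 224, 224).flatten().tolist(),
    }]
}
response = requests.post(url, json=payload)
print(response.json())
```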
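Fourth sketch: NNCF, listed near the end, performs post-training quantization of OpenVINO models for faster inference. A minimal sketch with random samples standing in for a real calibration set; the IR path and input shape are assumptions:

```python
import numpy as np
import nncf
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # assumed path to an OpenVINO IR

# Random samples as a stand-in calibration set (assumes one
# [1, 3, 224, 224] FP32 input; use real data in practice).
samples = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(10)]
quantized = nncf.quantize(model, nncf.Dataset(samples))

ov.save_model(quantized, "model_int8.xml")
```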