intel / edge-insights-vision
Edge Insights for Vision (eiv) is a package that helps auto-install Intel® GPU drivers and set up the environment for inference application development with the OpenVINO™ toolkit.
☆ 18 · Updated 5 months ago
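Once the drivers and environment are installed, a quick sanity check is to ask the OpenVINO runtime which inference devices it can see. A minimal sketch, assuming the `openvino` Python package is available (e.g. via `pip install openvino`); the helper below is illustrative and not part of eiv itself:

```python
def openvino_devices():
    """Return the device names OpenVINO can dispatch to (e.g. ['CPU', 'GPU']),
    or None if the openvino package is not installed.
    Hypothetical helper for illustration, not an eiv API."""
    try:
        import openvino as ov
    except ImportError:
        return None
    # Core enumerates the plugins/devices the runtime detected on this host.
    return ov.Core().available_devices

devices = openvino_devices()
print("available devices:", devices)
```

If the GPU driver is set up correctly, `"GPU"` should appear in the returned list on machines with a supported Intel GPU.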
Alternatives and similar repositories for edge-insights-vision:
Users interested in edge-insights-vision are comparing it to the libraries listed below.
- Run generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆ 224 · Updated this week
- A curated list of OpenVINO-based AI projects ☆ 122 · Updated 2 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆ 444 · Updated this week
- Pre-built components and code samples to help you build and deploy production-grade AI applications with the OpenVINO™ toolkit from Intel ☆ 128 · Updated this week
- OpenVINO NPU Plugin ☆ 47 · Updated last month
- With OpenVINO Test Drive, users can run large language models (LLMs) and models trained by Intel Geti on their devices, including AI PCs … ☆ 19 · Updated this week
- Accelerate LLMs with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using ipex-llm ☆ 161 · Updated 7 months ago
- Intel® NPU Acceleration Library ☆ 634 · Updated last month
- OpenVINO Tokenizers extension ☆ 30 · Updated this week
- ☆ 640 · Updated 3 weeks ago
- PyTorch installation wheels for Jetson Nano ☆ 107 · Updated last year
- ☆ 93 · Updated 5 months ago
- ☆ 100 · Updated last month
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆ 696 · Updated this week
- OpenVINO LLM Benchmark ☆ 11 · Updated last year
- Repository for OpenVINO's extra modules ☆ 115 · Updated 3 weeks ago
- Intel® Extension for TensorFlow* ☆ 332 · Updated last month
- Software Development Kit (SDK) for the Intel® Geti™ platform for computer vision AI model training ☆ 77 · Updated this week
- A Python package extending official PyTorch to easily obtain performance gains on Intel platforms ☆ 1,755 · Updated this week
- Intel® NPU (Neural Processing Unit) Driver ☆ 229 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆ 56 · Updated this week
- Run cloud-native workloads on NVIDIA GPUs ☆ 160 · Updated this week
- A reference application for a local AI assistant with LLM and RAG ☆ 106 · Updated 2 months ago
- Quickly verify whether Intel discrete GPUs have been set up correctly ☆ 9 · Updated 9 months ago
- Generative AI extensions for onnxruntime ☆ 632 · Updated this week
- High-performance, optimized pre-trained template AI application pipelines for systems using Hailo devices ☆ 115 · Updated 2 months ago
- A scalable inference server for models optimized with OpenVINO™ ☆ 708 · Updated this week
- Large Language Model Text Generation Inference on Habana Gaudi ☆ 33 · Updated this week
- NVIDIA DLA-SW, the recipes and tools for running deep learning workloads on NVIDIA DLA cores for inference applications ☆ 189 · Updated 8 months ago
- The jetson-examples repository by Seeed Studio offers a seamless, one-line command deployment to run vision AI and generative AI models o… ☆ 153 · Updated 3 weeks ago