intel / edge-insights-vision
Edge Insights for Vision (eiv) is a package that helps auto-install Intel® GPU drivers and set up the environment for inference application development using the OpenVINO™ toolkit.
☆20 · Updated 2 months ago
Alternatives and similar repositories for edge-insights-vision
Users interested in edge-insights-vision are comparing it to the libraries listed below.
- Run generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆381 · Updated this week
- ☆142 · Updated last week
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆698 · Updated 2 weeks ago
- OpenVINO Intel NPU Compiler ☆73 · Updated 2 weeks ago
- Intel® NPU Acceleration Library ☆700 · Updated 7 months ago
- Intel® NPU (Neural Processing Unit) Driver ☆351 · Updated last week
- Tools for easier OpenVINO development/debugging ☆10 · Updated 4 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆515 · Updated this week
- A curated list of OpenVINO based AI projects ☆172 · Updated 5 months ago
- Intel® Tensor Processing Primitives extension for PyTorch* ☆17 · Updated last week
- A Python package for extending the official PyTorch to easily obtain performance on Intel platforms ☆1,993 · Updated last week
- Run generative AI models on Sophgo BM1684X/BM1688 ☆254 · Updated this week
- Zhouyi model zoo (maintained at https://github.com/Arm-China/Model_zoo) ☆12 · Updated 11 months ago
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆721 · Updated this week
- Performance and diagnostic tools for Arm CMN on-chip interconnects ☆16 · Updated 2 weeks ago
- MLPerf® Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers ☆433 · Updated 3 months ago
- The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.)… ☆849 · Updated 2 weeks ago
- Accelerate LLMs with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using ipex-llm ☆169 · Updated 7 months ago
- Pre-built components and code samples to help you build and deploy production-grade AI applications with the OpenVINO™ Toolkit from Intel ☆193 · Updated last week
- EEMBC's Machine-Learning Inference Benchmark targeted at edge devices. ☆52 · Updated 3 years ago
- Tutorials for running models on first-gen Gaudi and Gaudi2 for training and inference. The source files for the tutorials on https://dev… ☆62 · Updated 2 months ago
- Machine learning compiler based on MLIR for Sophgo TPU. ☆824 · Updated last week
- GPU Stress Test is a tool to stress the compute engine of NVIDIA Tesla GPUs by running a BLAS matrix multiply using different data types… ☆114 · Updated 4 months ago
- ethos-u-vela is the ML model compiler tool used to compile a TFLite-Micro model into an optimised version for the Ethos-U NPU on iMX93 pl… ☆32 · Updated this week
- NVIDIA DLA-SW, the recipes and tools for running deep learning workloads on NVIDIA DLA cores for inference applications. ☆220 · Updated last year
- Library for modelling performance costs of different neural network workloads on NPU devices ☆34 · Updated 2 weeks ago
- OpenAI Triton backend for Intel® GPUs ☆221 · Updated this week
- ☆1,096 · Updated 2 weeks ago
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… ☆1,605 · Updated this week
- CMSIS-NN Library ☆334 · Updated last week