slyalin / openvino_devtools
Tools for easier OpenVINO development/debugging
☆9 · Updated 3 months ago
Alternatives and similar repositories for openvino_devtools
Users interested in openvino_devtools are comparing it to the libraries listed below.
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime (see the usage sketch after this list) ☆295 · Updated this week
- OpenVINO Tokenizers extension ☆36 · Updated last week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆473 · Updated last week
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,051 · Updated this week
- Repository for OpenVINO's extra modules ☆129 · Updated this week
- With OpenVINO Test Drive, users can run large language models (LLMs) and models trained by Intel Geti on their devices, including AI PCs … ☆26 · Updated 2 weeks ago
- OpenVINO Intel NPU Compiler ☆58 · Updated last week
- OpenVINO™ Explainable AI (XAI) Toolkit: Visual Explanation for OpenVINO Models ☆32 · Updated 3 months ago
- Software Development Kit (SDK) for the Intel® Geti™ platform for Computer Vision AI model training. ☆117 · Updated this week
- ☆20 · Updated 11 months ago
- A scalable inference server for models optimized with OpenVINO™ ☆739 · Updated this week
- ☆113 · Updated 2 months ago
- OpenAI Triton backend for Intel® GPUs ☆191 · Updated this week
- ☆8 · Updated 9 months ago
- A curated list of OpenVINO based AI projects ☆138 · Updated 2 weeks ago
- ☆28 · Updated last year
- A parser, editor and profiler tool for ONNX models. ☆442 · Updated 2 weeks ago
- Common utilities for ONNX converters ☆272 · Updated 6 months ago
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆718 · Updated this week
- ☆62 · Updated 6 months ago
- Run Computer Vision AI models with a simple C++/Python API using the OpenVINO Runtime ☆54 · Updated last week
- Profiling Tools Interfaces for GPU (PTI for GPU) is a set of Getting Started Documentation and Tools Library to start performance analysi… ☆228 · Updated 3 weeks ago
- oneAPI Level Zero Specification Headers and Loader ☆266 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆76 · Updated this week
- OpenVINO LLM Benchmark ☆11 · Updated last year
- Intel® NPU Acceleration Library ☆680 · Updated 2 months ago
- ☆15 · Updated 3 weeks ago
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated 9 months ago
- Advanced Quantization Algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA and HPU. Seamlessly integrated with Torchao, Tra… ☆525 · Updated this week
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆188 · Updated this week
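For the first entry above (the OpenVINO GenAI C++/Python API), here is a minimal sketch of Python text generation, assuming `pip install openvino-genai` and a model already exported to OpenVINO IR; the model directory name below is a placeholder, not a path from this listing.

```python
# Minimal sketch of text generation with OpenVINO GenAI.
# Assumes a model has already been converted/exported to OpenVINO IR format.
import openvino_genai as ov_genai

models_path = "TinyLlama-1.1B-Chat-v1.0-ov"      # placeholder: local directory with the exported model
pipe = ov_genai.LLMPipeline(models_path, "CPU")  # target device, e.g. "CPU", "GPU", or "NPU"
print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
```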