slyalin / openvino_devtools
Tools for easier OpenVINO development/debugging
☆10 · Updated 6 months ago
Alternatives and similar repositories for openvino_devtools
Users interested in openvino_devtools are comparing it to the libraries listed below.
- Run Generative AI models with a simple C++/Python API using OpenVINO Runtime ☆428 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆532 · Updated this week
- OpenVINO Tokenizers extension ☆48 · Updated this week
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,123 · Updated this week
- A Python package that extends the official PyTorch to easily obtain performance gains on Intel platforms ☆2,010 · Updated this week
- Intel® NPU Acceleration Library ☆703 · Updated 9 months ago
- ☆152 · Updated last month
- OpenVINO Intel NPU Compiler ☆81 · Updated last week
- A scalable inference server for models optimized with OpenVINO™ ☆823 · Updated this week
- Generative AI extensions for onnxruntime ☆953 · Updated this week
- OpenAI Triton backend for Intel® GPUs ☆226 · Updated this week
- Repository for OpenVINO's extra modules ☆163 · Updated this week
- Intel® Tensor Processing Primitives extension for PyTorch* ☆18 · Updated 3 weeks ago
- SYCL* Templates for Linear Algebra (SYCL*TLA) - SYCL-based CUTLASS implementation for Intel GPUs ☆66 · Updated last week
- ☆437 · Updated 4 months ago
- ☆112 · Updated 3 weeks ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆64 · Updated 7 months ago
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆12 · Updated last year
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆420 · Updated last week
- ☆61 · Updated last year
- With OpenVINO Test Drive, users can run large language models (LLMs) and models trained by Intel Geti on their devices, including AI PCs … ☆37 · Updated last month
- Common utilities for ONNX converters ☆294 · Updated last month
- Universal cross-platform tokenizers binding to HF and sentencepiece ☆451 · Updated 2 weeks ago
- ☆21 · Updated last year
- Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs. ☆2,249 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆85 · Updated this week
- OpenVINO™ Explainable AI (XAI) Toolkit: Visual Explanation for OpenVINO Models ☆36 · Updated 4 months ago
- A parser, editor and profiler tool for ONNX models ☆478 · Updated 3 months ago
- Experimental projects related to TensorRT ☆118 · Updated last week
- Profiling Tools Interfaces for GPU (PTI for GPU) is a set of Getting Started Documentation and Tools Library to start performance analysi… ☆258 · Updated 2 weeks ago