slyalin / openvino_devtools
Tools for easier OpenVINO development/debugging
☆9 · Updated 3 weeks ago
Alternatives and similar repositories for openvino_devtools
Users interested in openvino_devtools often compare it to the libraries listed below.
- Run Generative AI models with simple C++/Python APIs using the OpenVINO Runtime · ☆316 · Updated this week
- Neural Network Compression Framework for enhanced OpenVINO™ inference · ☆1,070 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools · ☆481 · Updated this week
- OpenVINO Tokenizers extension · ☆38 · Updated last week
- ☆8 · Updated 11 months ago
- A Python package extending the official PyTorch to easily obtain performance gains on Intel platforms · ☆1,921 · Updated last week
- Intel® Tensor Processing Primitives extension for PyTorch* · ☆17 · Updated this week
- OpenVINO Intel NPU Compiler · ☆62 · Updated last week
- With OpenVINO Test Drive, users can run large language models (LLMs) and models trained with Intel Geti on their devices, including AI PCs … · ☆29 · Updated this week
- Intel® NPU Acceleration Library · ☆680 · Updated 3 months ago
- OpenAI Triton backend for Intel® GPUs · ☆197 · Updated this week
- ☆119 · Updated 2 weeks ago
- ☆20 · Updated last year
- Repository for OpenVINO's extra modules · ☆134 · Updated last week
- A parser, editor, and profiler tool for ONNX models · ☆447 · Updated this week
- A CUTLASS implementation using SYCL · ☆32 · Updated this week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… · ☆61 · Updated last month
- ONNX Optimizer · ☆737 · Updated this week
- ☆62 · Updated 7 months ago
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… · ☆2,465 · Updated this week
- Generative AI extensions for onnxruntime · ☆783 · Updated this week
- ☆28 · Updated last year
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python · ☆369 · Updated this week
- A scalable inference server for models optimized with OpenVINO™ · ☆746 · Updated this week
- Intel® AI Reference Models: Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… · ☆717 · Updated last week
- A unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillation, speculative decoding, etc. … · ☆1,093 · Updated this week
- ☆17 · Updated last week
- Examples for using ONNX Runtime for machine learning inferencing · ☆1,443 · Updated last week
- Machine learning compiler based on MLIR for the Sophgo TPU · ☆766 · Updated last week
- Library for modelling performance costs of different neural network workloads on NPU devices · ☆34 · Updated this week