openvinotoolkit / openvino_notebooks
📚 Jupyter notebook tutorials for OpenVINO™
☆2,804 · Updated this week
Alternatives and similar repositories for openvino_notebooks
Users interested in openvino_notebooks are comparing it to the libraries listed below.
- OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference (see the minimal runtime sketch after this list) ☆8,348 · Updated this week
- Run generative AI models with a simple C++/Python API using OpenVINO Runtime ☆277 · Updated this week
- Pre-trained Deep Learning models and demos (high quality and extremely fast) ☆4,221 · Updated last week
- A scalable inference server for models optimized with OpenVINO™ ☆731 · Updated this week
- Examples for using ONNX Runtime for machine learning inferencing. ☆1,385 · Updated last week
- Repository for OpenVINO's extra modules ☆122 · Updated last week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆466 · Updated this week
- A curated list of OpenVINO based AI projects ☆132 · Updated 5 months ago
- Pre-built components and code samples to help you build and deploy production-grade AI applications with the OpenVINO™ Toolkit from Intel ☆153 · Updated last week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,762 · Updated this week
- A Python package that extends official PyTorch to obtain additional performance on Intel platforms ☆1,857 · Updated this week
- Tools for easier OpenVINO development/debugging ☆9 · Updated 2 months ago
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX ☆2,426 · Updated 3 months ago
- The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.)… ☆704 · Updated last week
- AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,640 · Updated 2 months ago
- Images to inference with no labeling (use foundation models to train supervised models). ☆2,273 · Updated 2 weeks ago
- OpenVINO™ integration with TensorFlow ☆179 · Updated 10 months ago
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆11,655 · Updated last week
- Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs. ☆1,939 · Updated last week
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,419 · Updated this week
- YOLOv6: a single-stage object detection framework dedicated to industrial applications. ☆5,807 · Updated 9 months ago
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator ☆16,742 · Updated this week
- Generative AI extensions for onnxruntime ☆722 · Updated this week
- Transformer related optimization, including BERT, GPT ☆6,173 · Updated last year
- ONNX Optimizer ☆715 · Updated this week
- Framework agnostic sliced/tiled inference + interactive ui + error analysis plots ☆4,567 · Updated 2 weeks ago
- Supporting PyTorch models with the Google AI Edge TFLite runtime. ☆595 · Updated last week
- This repository is a home to Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework. Pipeline Framework is a streaming med… ☆555 · Updated last week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆2,916 · Updated this week
- Simplify your ONNX model ☆4,088 · Updated 8 months ago
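
As a point of reference for the OpenVINO Runtime entry flagged above, here is a minimal inference sketch. It assumes the `openvino` Python package (2023.1 or newer) is installed and that a placeholder `model.onnx` file exists locally; the file name and input shape are illustrative assumptions, not taken from any of the listed repositories.

```python
# Minimal OpenVINO Runtime sketch. Assumptions: `pip install openvino` has been
# run, and "model.onnx" (with a 1x3x224x224 float32 input) is a placeholder path.
import numpy as np
from openvino import Core  # recent releases expose Core at the package root

core = Core()
model = core.read_model("model.onnx")        # load an ONNX or IR model from disk
compiled = core.compile_model(model, "CPU")  # compile the model for the CPU device

dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
results = compiled([dummy_input])            # synchronous inference on one batch
print(next(iter(results.values())).shape)    # shape of the first output tensor
```

Swapping the "CPU" device string for "GPU" or "AUTO" targets other device plugins; the notebooks in this repository walk through the same read/compile/infer flow with real models.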