openvinotoolkit / openvino
OpenVINO™ is an open source toolkit for optimizing and deploying AI inference.
☆9,546 · Updated this week
Alternatives and similar repositories for openvino
Users interested in openvino compare it to the libraries listed below:
- Pre-trained Deep Learning models and demos (high quality and extremely fast) · ☆4,344 · Updated 2 weeks ago
- 📚 Jupyter notebook tutorials for OpenVINO™ · ☆3,012 · Updated this week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… · ☆12,628 · Updated last month
- Neural Network Compression Framework for enhanced OpenVINO™ inference · ☆1,119 · Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. · ☆10,245 · Updated last week
- ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator · ☆19,050 · Updated this week
- Simplify your ONNX model · ☆4,280 · Updated this week
- oneAPI Deep Neural Network Library (oneDNN) · ☆3,957 · Updated this week
- A Python package that extends the official PyTorch to deliver improved performance on Intel platforms · ☆2,005 · Updated this week
- Open standard for machine learning interoperability · ☆20,172 · Updated this week
- A collection of pre-trained, state-of-the-art models in the ONNX format · ☆9,344 · Updated 4 months ago
- CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision. · ☆2,645 · Updated 2 months ago
- Open Machine Learning Compiler Framework · ☆13,045 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT · ☆2,928 · Updated this week
- Serve, optimize and scale PyTorch models in production · ☆4,356 · Updated 5 months ago
- ONNX-TensorRT: TensorRT backend for ONNX · ☆3,180 · Updated 2 months ago
- Annotate better with CVAT, the industry-leading data engine for machine learning. Used and trusted by teams at any scale, for data of any… · ☆15,142 · Updated last week
- Visualizer for neural network, deep learning and machine learning models · ☆32,236 · Updated this week
- Run generative AI models with a simple C++/Python API using the OpenVINO Runtime · ☆419 · Updated this week
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… · ☆5,600 · Updated last week
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … · ☆2,570 · Updated last week
- A scalable inference server for models optimized with OpenVINO™ · ☆820 · Updated this week
- OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient. · ☆9,387 · Updated last month
- High-performance inference and deployment toolkit for LLMs and VLMs based on PaddlePaddle · ☆3,630 · Updated this week
- An easy-to-use PyTorch-to-TensorRT converter · ☆4,845 · Updated last year
- Tengine is a lightweight, high-performance, modular inference engine for embedded devices · ☆4,506 · Updated 10 months ago
- ncnn is a high-performance neural network inference framework optimized for mobile platforms · ☆22,647 · Updated this week
- High-efficiency floating-point neural network inference operators for mobile, server, and Web · ☆2,237 · Updated this week
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX · ☆2,510 · Updated 4 months ago
- Tutorials for creating and using ONNX models · ☆3,651 · Updated last year