openvinotoolkit / openvino
OpenVINO™ is an open source toolkit for optimizing and deploying AI inference
☆8,889 · Updated last week
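For context, a minimal sketch of what "optimizing and deploying AI inference" with OpenVINO typically looks like from Python. The model file name, target device, and input shape below are placeholders for illustration, not taken from this page.

```python
# Minimal OpenVINO inference sketch (assumes the `openvino` Python package is installed
# and that "model.xml" is an existing OpenVINO IR model; the input shape is a placeholder).
import numpy as np
import openvino as ov

core = ov.Core()                                 # entry point to devices and plugins
model = core.read_model("model.xml")             # load an IR (or ONNX) model from disk
compiled = core.compile_model(model, "CPU")      # compile for a target device, e.g. "CPU" or "GPU"

x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input; real shape depends on the model
result = compiled([x])                           # run synchronous inference
output = result[compiled.output(0)]              # first output tensor as a NumPy array
print(output.shape)
```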
Alternatives and similar repositories for openvino
Users interested in openvino are comparing it to the libraries listed below.
- Pre-trained Deep Learning models and demos (high quality and extremely fast) ☆4,295 · Updated last week
- 📚 Jupyter notebook tutorials for OpenVINO™ ☆2,910 · Updated last week
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator (see the Python sketch after this list) ☆17,966 · Updated this week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆12,196 · Updated this week
- A collection of pre-trained, state-of-the-art models in the ONNX format ☆9,044 · Updated 2 weeks ago
- Open deep learning compiler stack for CPUs, GPUs and specialized accelerators ☆12,660 · Updated this week
- oneAPI Deep Neural Network Library (oneDNN) ☆3,889 · Updated this week
- Open standard for machine learning interoperability ☆19,639 · Updated last week
- Development repository for the Triton language and compiler ☆17,069 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,865 · Updated this week
- Tutorials for creating and using ONNX models ☆3,603 · Updated last year
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆3,541 · Updated last week
- Simplify your ONNX model ☆4,186 · Updated last month
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆9,829 · Updated this week
- A scalable inference server for models optimized with OpenVINO™ ☆768 · Updated this week
- A Python package that extends the official PyTorch to deliver additional performance on Intel platforms ☆1,969 · Updated this week
- TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop and server. TNN is … ☆4,582 · Updated 4 months ago
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,500 · Updated this week
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆342 · Updated this week
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… ☆5,523 · Updated this week
- ncnn is a high-performance neural network inference framework optimized for the mobile platform ☆22,104 · Updated last week
- High-performance Inference and Deployment Toolkit for LLMs and VLMs based on PaddlePaddle ☆3,521 · Updated this week
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,154 · Updated 3 weeks ago
- An easy-to-use PyTorch to TensorRT converter ☆4,817 · Updated last year
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX ☆2,480 · Updated 2 weeks ago
- 🛠 A lite C++ AI toolkit: 100+ models with MNN, ORT and TRT, including Det, Seg, Stable-Diffusion, Face-Fusion, etc. 🎉 ☆4,249 · Updated last month
- CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision. ☆2,583 · Updated 4 months ago
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,121 · Updated this week
- Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors ☆13,987 · Updated last year
- 🐍 Geometric Computer Vision Library for Spatial AI ☆10,774 · Updated this week
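As a point of comparison with the OpenVINO sketch above, a minimal hedged sketch of running a model with ONNX Runtime, the accelerator referenced earlier in this list. The file name "model.onnx" and the input shape are assumptions for illustration; the input name is discovered from the session rather than hard-coded.

```python
# Minimal ONNX Runtime inference sketch (assumes the `onnxruntime` package is installed
# and that "model.onnx" exists; the input shape below is a placeholder).
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name              # read the model's actual input name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input; real shape depends on the model
outputs = session.run(None, {input_name: x})            # None -> return all model outputs
print(outputs[0].shape)
```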