openvinotoolkit / openvino
OpenVINO™ is an open source toolkit for optimizing and deploying AI inference
☆8,242 · Updated this week
Alternatives and similar repositories for openvino:
Users interested in openvino also compare it to the libraries listed below:
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX ☆2,409 · Updated 3 months ago
- Pre-trained Deep Learning models and demos (high quality and extremely fast) ☆4,203 · Updated 3 weeks ago
- oneAPI Deep Neural Network Library (oneDNN) ☆3,783 · Updated this week
- Simplify your ONNX model ☆4,065 · Updated 8 months ago
- 📚 Jupyter notebook tutorials for OpenVINO™ ☆2,769 · Updated this week
- ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator ☆16,455 · Updated this week
- Open standard for machine learning interoperability ☆18,895 · Updated this week
- A collection of pre-trained, state-of-the-art models in the ONNX format ☆8,537 · Updated last year
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆11,543 · Updated this week
- Open deep learning compiler stack for CPU, GPU and specialized accelerators ☆12,259 · Updated this week
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,065 · Updated 2 months ago
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,005 · Updated this week
- Compiler for Neural Network hardware accelerators ☆3,289 · Updated 11 months ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,742 · Updated this week
- An easy-to-use PyTorch to TensorRT converter ☆4,729 · Updated 8 months ago
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,007 · Updated this week
- Tutorials for creating and using ONNX models ☆3,504 · Updated 9 months ago
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆9,157 · Updated this week
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,387 · Updated last week
- A scalable inference server for models optimized with OpenVINO™ ☆723 · Updated this week
- TensorFlow backend for ONNX ☆1,301 · Updated last year
- State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enter… ☆14,217 · Updated 8 months ago
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologi… ☆2,960 · Updated last week
- Google Brain AutoML ☆6,356 · Updated 2 months ago
- YOLOX is a high-performance anchor-free YOLO, exceeding YOLOv3~v5, with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO supported. Documenta… ☆9,812 · Updated 5 months ago
- Sparsity-aware deep learning inference runtime for CPUs ☆3,140 · Updated 9 months ago
- A language for fast, portable data-parallel computation ☆6,047 · Updated this week
- ONNXMLTools enables conversion of models to ONNX ☆1,074 · Updated 4 months ago
- A tool to modify ONNX models visually, based on Netron and Flask ☆1,482 · Updated 2 months ago
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more ☆32,148 · Updated this week