Deelvin / apache-tvm-tutorials
☆10 · Updated 2 years ago
Alternatives and similar repositories for apache-tvm-tutorials
Users interested in apache-tvm-tutorials are comparing it to the libraries listed below.
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆140 · Updated last week
- The Triton backend for TensorRT. ☆84 · Updated 2 weeks ago
- This script converts the ONNX/OpenVINO IR model to TensorFlow's saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX and… ☆345 · Updated 3 years ago
- The Triton backend for the ONNX Runtime. ☆172 · Updated this week
- Inference of quantization-aware trained networks using TensorRT ☆83 · Updated 3 years ago
- Scailable ONNX Python tools ☆98 · Updated last year
- ONNX Runtime Inference C++ Example ☆257 · Updated 10 months ago
- Convert ONNX models to PyTorch. ☆725 · Updated 3 months ago
- Deep Learning Inference benchmark. Supports OpenVINO™ toolkit, TensorFlow, TensorFlow Lite, ONNX Runtime, OpenCV DNN, MXNet, PyTorch, Apa… ☆35 · Updated last week
- A code generator from ONNX to PyTorch code ☆142 · Updated 3 years ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆216 · Updated last week
- Conversion of PyTorch models into TFLite ☆399 · Updated 2 years ago
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server ☆285 · Updated 3 years ago
- Parse TFLite models (*.tflite) easily with Python. Check the API at https://zhenhuaw.me/tflite/docs/ ☆104 · Updated last year
- An example of how to use the multiprocessing package along with PyTorch. ☆21 · Updated 5 years ago
- A set of simple tools for splitting, merging, OP deletion, size compression, rewriting attributes and constants, OP generation, change op… ☆303 · Updated last year
- Model Compression Toolkit (MCT) is an open-source project for neural network model optimization under efficient, constrained hardware. Th… ☆432 · Updated last week
- Accelerate PyTorch models with ONNX Runtime ☆367 · Updated this week
- Count the number of parameters / MACs / FLOPs for ONNX models. ☆95 · Updated last year
- PyTorch to TensorFlow Lite converter ☆183 · Updated last year
- Common utilities for ONNX converters ☆293 · Updated last month
- A Toolkit to Help Optimize Large ONNX Models ☆163 · Updated 3 months ago
- ☆116 · Updated 5 years ago
- How to deploy open-source models using DeepStream and Triton Inference Server ☆86 · Updated last year
- ☆52 · Updated 5 years ago
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆182 · Updated last month
- Repository for OpenVINO's extra modules ☆162 · Updated last week
- The Triton backend for PyTorch TorchScript models. ☆172 · Updated 3 weeks ago
- Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch), including a converter from PyTorch -> O… ☆33 · Updated 4 years ago
- OpenVINO backend for Triton. ☆37 · Updated 3 weeks ago