onnx/optimizer
ONNX Optimizer
☆795 · Updated last week
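For orientation, here is a minimal sketch of how the onnxoptimizer Python package is typically applied to a saved model. The file names and the specific pass list are illustrative assumptions, not part of this listing; `onnxoptimizer.get_available_passes()` reports what your installed version actually supports.

```python
# Hedged sketch: load a model, run selected optimization passes, save the result.
# "model.onnx" / "model_opt.onnx" are placeholder paths (assumptions).
import onnx
import onnxoptimizer

model = onnx.load("model.onnx")

# Illustrative subset of passes; query onnxoptimizer.get_available_passes()
# for the full list supported by your installed version.
passes = [
    "eliminate_identity",
    "eliminate_nop_transpose",
    "fuse_bn_into_conv",
]
optimized = onnxoptimizer.optimize(model, passes)

onnx.checker.check_model(optimized)
onnx.save(optimized, "model_opt.onnx")
```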
Alternatives and similar repositories for optimizer
Users interested in optimizer are comparing it to the libraries listed below.
- Common utilities for ONNX converters (☆293, updated last month)
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime (☆437, updated last month)
- A parser, editor and profiler tool for ONNX models (☆478, updated 3 months ago)
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python (☆420, updated last week)
- A tool to modify ONNX models visually, based on Netron and Flask (☆1,605, updated 2 months ago)
- Transform ONNX model to PyTorch representation (☆345, updated 3 months ago)
- TensorRT Plugin Autogen Tool (☆367, updated 2 years ago)
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure (☆973, updated this week)
- Neural Network Compression Framework for enhanced OpenVINO™ inference (☆1,123, updated this week)
- Convert ONNX models to PyTorch (☆725, updated 3 months ago)
- Deploy your model with TensorRT quickly (☆764, updated 2 years ago)
- PyTorch Neural Network eXchange (☆675, updated this week)
- Common source, scripts and utilities for creating Triton backends (☆366, updated this week)
- A set of simple tools for splitting, merging, OP deletion, size compression, rewriting attributes and constants, OP generation, change op… (☆303, updated last year)
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… (☆503, updated this week)
- Simplify your onnx model; a usage sketch follows this list (☆4,285, updated last week)
- Examples for using ONNX Runtime for model training (☆361, updated last year)
- TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework (☆862, updated last month)
- Self-created tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massiv… (☆919, updated this week)
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster (☆1,072, updated last year)
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description (☆1,006, updated last year)
- Tensorflow Backend for ONNX (☆1,325, updated last year)
- Triton Python, C++ and Java client libraries, and GRPC-generated client examples for Go, Java and Scala (☆677, updated last week)
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT (☆2,939, updated this week)
- ONNXMLTools enables conversion of models to ONNX (☆1,139, updated this week)
- TensorFlow/TensorRT integration (☆743, updated 2 years ago)
- A toolkit to help optimize large ONNX models (☆163, updated 3 months ago)
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API (☆140, updated last week)
- A code generator from ONNX to PyTorch code (☆142, updated 3 years ago)
- A primitive library for neural networks (☆1,368, updated last year)
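As referenced in the onnx-simplifier entry above, here is a minimal sketch of how that library is commonly invoked from Python; it assumes the `onnxsim.simplify` API and uses placeholder file names.

```python
# Hedged sketch: simplify a model with onnx-simplifier (onnxsim).
# "model.onnx" / "model_sim.onnx" are placeholder paths (assumptions).
import onnx
from onnxsim import simplify

model = onnx.load("model.onnx")

# simplify() folds constants and strips redundant nodes; `check` indicates
# whether the simplified graph still matches the original on test inputs.
model_sim, check = simplify(model)
assert check, "simplified ONNX model could not be validated"

onnx.save(model_sim, "model_sim.onnx")
```

The package also installs an `onnxsim` command-line entry point that wraps the same call, e.g. `onnxsim model.onnx model_sim.onnx`.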