onnx / tensorflow-onnx
Convert TensorFlow, Keras, TensorFlow.js, and TFLite models to ONNX
☆2,376 · Updated 3 weeks ago
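For reference, a minimal sketch of what a conversion with tf2onnx looks like via its Python API, following the pattern documented in the project's README; the toy Keras model, input shape, and opset version below are illustrative assumptions:

```python
# Minimal sketch: convert a tf.keras model to ONNX with tf2onnx.
# The toy model, input shape, and opset are illustrative assumptions.
import tensorflow as tf
import tf2onnx

# A small stand-in model; any tf.keras model converts the same way.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Describe the model input so the converter can trace the graph.
spec = (tf.TensorSpec((None, 224, 224, 3), tf.float32, name="input"),)

# from_keras returns the ONNX ModelProto plus external tensor storage.
onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature=spec, opset=13)

with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```

SavedModel checkpoints can also be converted from the command line, e.g. `python -m tf2onnx.convert --saved-model ./saved_model --output model.onnx`.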
Alternatives and similar repositories for tensorflow-onnx:
Users interested in tensorflow-onnx are comparing it to the libraries listed below.
- TensorFlow Backend for ONNX ☆1,295 · Updated 11 months ago
- Simplify your ONNX model (see the onnx-simplifier sketch after this list) ☆3,993 · Updated 6 months ago
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,028 · Updated last week
- ONNXMLTools enables conversion of models to ONNX ☆1,054 · Updated last month
- Tutorials for creating and using ONNX models ☆3,459 · Updated 7 months ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,688 · Updated this week
- Convert tf.keras/Keras models to ONNX ☆378 · Updated 3 years ago
- A tool to modify ONNX models visually, based on Netron and Flask. ☆1,436 · Updated this week
- TensorFlow/TensorRT integration ☆740 · Updated last year
- An easy-to-use PyTorch to TensorRT converter ☆4,682 · Updated 6 months ago
- Examples for using ONNX Runtime for machine learning inference. ☆1,312 · Updated last month
- Actively maintained ONNX Optimizer ☆673 · Updated last month
- A collection of pre-trained, state-of-the-art models in the ONNX format ☆8,309 · Updated 10 months ago
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆978 · Updated this week
- A toolkit to optimize Keras and TensorFlow ML models for deployment, including quantization and pruning. ☆1,523 · Updated 3 weeks ago
- MMdnn is a set of tools to help users interoperate among different deep learning frameworks, e.g. model conversion and visualization. ☆5,804 · Updated 9 months ago
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT. ☆11,256 · Updated last month
- Self-created tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massive Transpose extrapolation problem in onnx-tensorflow (onnx-tf). ☆756 · Updated 2 weeks ago
- The Triton Inference Server provides an optimized cloud and edge inference solution. ☆8,826 · Updated this week
- Save, load, and run inference from a frozen graph in TensorFlow 1.x and 2.x ☆301 · Updated 4 years ago
- TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet ☆1,766 · Updated 7 months ago
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,232 · Updated this week
- Open standard for machine learning interoperability ☆18,525 · Updated this week
- Quickly deploy your model with TensorRT. ☆763 · Updated last year
- A scalable inference server for models optimized with OpenVINO™ ☆708 · Updated this week
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,536 · Updated 5 years ago
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆1,968 · Updated this week
- ONNX Runtime: cross-platform, high-performance ML inference and training accelerator (see the ONNX Runtime sketch after this list) ☆15,802 · Updated this week
- Train, Evaluate, Optimize, and Deploy Computer Vision Models via OpenVINO™ ☆1,160 · Updated this week
- Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller ☆4,371 · Updated last year
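As referenced in the onnx-simplifier entry above, a minimal usage sketch of its Python API; the file names are placeholders:

```python
# Minimal sketch: fold constants and prune redundant nodes with onnx-simplifier.
# "model.onnx" / "model_simplified.onnx" are placeholder file names.
import onnx
from onnxsim import simplify

model = onnx.load("model.onnx")
model_simplified, ok = simplify(model)  # returns (simplified model, check flag)
assert ok, "simplified model failed the output-consistency check"
onnx.save(model_simplified, "model_simplified.onnx")
```

The same simplification is also available as a CLI: `onnxsim model.onnx model_simplified.onnx`.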
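And as referenced in the ONNX Runtime entry, a minimal inference sketch for a converted model; the model path, input shape, and dummy batch are assumptions to replace with your model's actual values:

```python
# Minimal sketch: run a converted model with ONNX Runtime on CPU.
# The model path and 224x224x3 input shape are assumptions.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Query the real input name/shape rather than hard-coding them.
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape)

x = np.random.rand(1, 224, 224, 3).astype(np.float32)  # dummy batch
outputs = session.run(None, {input_meta.name: x})      # None = fetch all outputs
print(outputs[0].shape)
```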