onnx / tensorflow-onnx
Convert TensorFlow, Keras, TensorFlow.js, and TFLite models to ONNX
☆2,404 · Updated 2 months ago
Alternatives and similar repositories for tensorflow-onnx:
Users interested in tensorflow-onnx are comparing it to the libraries listed below.
- TensorFlow Backend for ONNX ☆1,302 · Updated last year
- Tutorials for creating and using ONNX models ☆3,499 · Updated 9 months ago
- Simplify your ONNX model ☆4,059 · Updated 7 months ago
- ONNXMLTools enables conversion of models to ONNX ☆1,073 · Updated 3 months ago
- TensorFlow/TensorRT integration ☆741 · Updated last year
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,060 · Updated last month
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,730 · Updated this week
- Convert tf.keras/Keras models to ONNX ☆378 · Updated 3 years ago
- A tool to modify ONNX models visually, based on Netron and Flask ☆1,473 · Updated 2 months ago
- ONNX Optimizer ☆696 · Updated 3 weeks ago
- A toolkit to optimize ML models for deployment with Keras and TensorFlow, including quantization and pruning ☆1,532 · Updated 2 months ago
- An easy-to-use PyTorch to TensorRT converter ☆4,725 · Updated 8 months ago
- A collection of pre-trained, state-of-the-art models in the ONNX format ☆8,483 · Updated 11 months ago
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,004 · Updated this week
- Examples of using ONNX Runtime for machine learning inferencing ☆1,354 · Updated last week
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models ☆2,282 · Updated this week
- Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distille… ☆4,389 · Updated 2 years ago
- A converter of deep learning models between different deep learning frameworks ☆3,249 · Updated last year
- MMdnn is a set of tools to help users interoperate among different deep learning frameworks, e.g. model conversion and visualization. Co… ☆5,811 · Updated 10 months ago
- Quantized Neural Network PACKage: a mobile-optimized implementation of quantized neural network operators ☆1,539 · Updated 5 years ago
- Save and load a frozen graph, and run inference from it, in TensorFlow 1.x and 2.x ☆302 · Updated 4 years ago
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,002 · Updated this week
- A scalable inference server for models optimized with OpenVINO™ ☆722 · Updated this week
- Convert scikit-learn models and pipelines to ONNX ☆578 · Updated last month
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime ☆375 · Updated this week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆11,500 · Updated last month
- Serve, optimize, and scale PyTorch models in production ☆4,313 · Updated last week
- PyTorch to Keras model converter ☆859 · Updated 2 years ago
- Open standard for machine learning interoperability ☆18,826 · Updated this week
- A performant and modular runtime for TensorFlow ☆759 · Updated last week