onnx / onnx-tensorflow
Tensorflow Backend for ONNX
☆1,302 · Updated last year
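For context, a minimal usage sketch of onnx-tensorflow (onnx-tf): load an ONNX model, wrap it with `prepare`, then run it or export it as a TensorFlow SavedModel. This is a sketch rather than the project's documented example; file names are placeholders and it assumes onnx and onnx-tf are installed.

```python
import onnx
from onnx_tf.backend import prepare

# Load the ONNX graph (placeholder file name).
onnx_model = onnx.load("model.onnx")

# Build a TensorFlow representation of the ONNX graph.
tf_rep = prepare(onnx_model)

# Either run it directly on numpy inputs with tf_rep.run(inputs),
# or export a TensorFlow SavedModel directory to disk.
tf_rep.export_graph("model_tf")
```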
Alternatives and similar repositories for onnx-tensorflow:
Users who are interested in onnx-tensorflow are comparing it to the libraries listed below.
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX (see the tf2onnx conversion sketch after this list) ☆2,404 · Updated 2 months ago
- Tutorials for creating and using ONNX models ☆3,499 · Updated 9 months ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,539 · Updated 5 years ago
- TensorFlow/TensorRT integration ☆741 · Updated last year
- ONNXMLTools enables conversion of models to ONNX ☆1,073 · Updated 3 months ago
- Convert tf.keras/Keras models to ONNX ☆378 · Updated 3 years ago
- A toolkit for optimizing ML models built with Keras and TensorFlow for deployment, including quantization and pruning. ☆1,532 · Updated 2 months ago
- Simplify your ONNX model (see the onnx-simplifier sketch after this list) ☆4,059 · Updated 7 months ago
- ONNX Optimizer ☆696 · Updated 3 weeks ago
- Conversion of deep learning models between different deep learning frameworks. ☆3,249 · Updated last year
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,004 · Updated this week
- PyTorch to Keras model converter ☆859 · Updated 2 years ago
- Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distille… ☆4,389 · Updated 2 years ago
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,060 · Updated last month
- Save, load, and run inference from frozen graphs in TensorFlow 1.x and 2.x ☆302 · Updated 4 years ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,730 · Updated this week
- TVM integration into PyTorch ☆452 · Updated 5 years ago
- MMdnn is a set of tools to help users inter-operate among different deep learning frameworks, e.g. model conversion and visualization. Co… ☆5,811 · Updated 10 months ago
- Memory consumption and FLOP count estimates for convnets ☆919 · Updated 6 years ago
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,282 · Updated this week
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,002 · Updated this week
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆940 · Updated 2 weeks ago
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime ☆375 · Updated this week
- A tool to modify ONNX models visually, based on Netron and Flask. ☆1,473 · Updated 2 months ago
- A performant and modular runtime for TensorFlow ☆759 · Updated last week
- Convert ONNX model graphs to Keras models. ☆201 · Updated 10 months ago
- Compiler for Neural Network hardware accelerators ☆3,283 · Updated 11 months ago
- Train, Evaluate, Optimize, Deploy Computer Vision Models via OpenVINO™ ☆1,162 · Updated this week
- Code for: "And the bit goes down: Revisiting the quantization of neural networks" ☆633 · Updated 4 years ago
- Transform ONNX model to PyTorch representation ☆332 · Updated 5 months ago
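For the tf2onnx entry above, a hedged sketch of converting a Keras model to ONNX with its Python API; the model, input shape, and file names are illustrative assumptions, and the same conversion can also be driven from the command line (e.g. `python -m tf2onnx.convert --saved-model ./saved_model --output model.onnx`).

```python
import tensorflow as tf
import tf2onnx

# Any Keras model works here; MobileNetV2 is just an illustrative choice.
keras_model = tf.keras.applications.MobileNetV2(weights=None)

# Describe the expected input so tf2onnx can trace the model.
spec = (tf.TensorSpec((1, 224, 224, 3), tf.float32, name="input"),)

# Convert and write the ONNX file in one call.
onnx_model, _ = tf2onnx.convert.from_keras(
    keras_model, input_signature=spec, opset=13, output_path="mobilenet.onnx"
)
```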
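Similarly, for the onnx-simplifier entry, a sketch of its Python API under the assumption that a model.onnx file already exists; the package also ships an `onnxsim` command-line tool that performs the same simplification.

```python
import onnx
from onnxsim import simplify

# Load the model to be simplified (placeholder file name).
model = onnx.load("model.onnx")

# simplify() folds constants and removes redundant nodes;
# `check` reports whether the simplified model still matches the original.
model_simplified, check = simplify(model)
assert check, "simplified ONNX model could not be validated"

onnx.save(model_simplified, "model_simplified.onnx")
```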