eliberis / tflite-tools
TFLite model analyzer & memory optimizer
☆123, updated last year
Alternatives and similar repositories for tflite-tools:
Users interested in tflite-tools are comparing it to the libraries listed below.
- Generate TFLite Micro code that bypasses the interpreter and calls directly into kernels (☆79, updated 2 years ago)
- ☆220, updated last year
- MLPerf™ Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers (☆387, updated last week)
- Parse TFLite models (*.tflite) easily with Python; check the API at https://zhenhuaw.me/tflite/docs/ (☆98, updated last month; a short parsing sketch follows this list)
- μNAS is a neural architecture search (NAS) system that designs small-yet-powerful microcontroller-compatible neural networks (☆79, updated 4 years ago)
- Python library to work with the Visual Wake Words Dataset (☆34, updated 4 years ago)
- Roughly calculate FLOPs of a TFLite model (☆37, updated 3 years ago)
- Inference of quantization-aware trained networks using TensorRT (☆80, updated 2 years ago)
- Reference implementations of popular Binarized Neural Networks (☆107, updated last week)
- Count the number of parameters / MACs / FLOPs for ONNX models (☆89, updated 4 months ago; a parameter-counting sketch follows this list)
- PyTorch Static Quantization Example (☆38, updated 3 years ago; a minimal quantization sketch follows this list)
- This script converts an ONNX/OpenVINO IR model to TensorFlow's saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX and… (☆340, updated 2 years ago)
- Scailable ONNX Python tools (☆97, updated 4 months ago)
- MobileNet v1 trained on ImageNet for STM32 using extended CMSIS-NN with INT-Q quantization support (☆86, updated 5 years ago)
- Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM (☆427, updated last year)
- QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX (☆138, updated last week)
- ☆29, updated 3 years ago
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… (☆371, updated this week)
- Convert a TFLite model to JSON and make it editable in the IDE; it also converts the edited JSON back to a TFLite binary (☆27, updated 2 years ago)
- PyTorch implementation of APoT quantization (ICLR 2020) (☆271, updated 3 months ago)
- CMix-NN: Mixed Low-Precision CNN Library for Memory-Constrained Edge Devices (☆41, updated 4 years ago)
- Convert an ONNX model graph to the Keras model format (☆201, updated 8 months ago)
- PyTorch implementation of Data-Free Quantization Through Weight Equalization and Bias Correction (☆260, updated last year)
- Highly optimized inference engine for Binarized Neural Networks (☆248, updated this week)
- Acuity Model Zoo (☆140, updated 2 years ago)
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… (☆531, updated 11 months ago)
- Generate saved_model, tfjs, tf-trt, EdgeTPU, CoreML, quantized tflite, ONNX, OpenVINO, Myriad Inference Engine blob and .pb from .tflite… (☆267, updated 2 years ago)
- ☆69, updated 2 years ago
- VeriSilicon Tensor Interface Module (☆230, updated 2 months ago)
- Graph Transforms to Quantize and Retrain Deep Neural Nets in TensorFlow (☆168, updated 5 years ago)
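
For the TFLite parsing package linked above (https://zhenhuaw.me/tflite/docs/), a minimal sketch of walking a model's operators might look like the following. The file name is a placeholder, the accessors come from the flatbuffers-generated TFLite schema bindings, and `tflite.opcode2name` is assumed from that package's documentation.

```python
# A hedged sketch, assuming `pip install tflite` and a local model.tflite file.
import tflite

with open("model.tflite", "rb") as f:
    buf = f.read()

# Root object of the flatbuffer; accessors mirror the TFLite schema.
model = tflite.Model.GetRootAsModel(buf, 0)
subgraph = model.Subgraphs(0)

print("schema version:", model.Version())
print("tensor count:", subgraph.TensorsLength())

# Walk the operators and print each one's builtin opcode name.
for i in range(subgraph.OperatorsLength()):
    op = subgraph.Operators(i)
    opcode = model.OperatorCodes(op.OpcodeIndex())
    # opcode2name is the package's documented helper (assumed here).
    print(i, tflite.opcode2name(opcode.BuiltinCode()))
```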
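
The ONNX counters listed above ship their own tooling; as a rough, tool-agnostic illustration, the parameter count alone can be derived directly from a graph's initializers with the plain `onnx` package. The model path is a placeholder, and this deliberately ignores MACs/FLOPs, which require per-operator shape math.

```python
# A minimal sketch: sum the element counts of all initializer (weight) tensors.
import numpy as np
import onnx

model = onnx.load("model.onnx")  # placeholder path
total_params = sum(int(np.prod(init.dims)) for init in model.graph.initializer)
print(f"parameters: {total_params:,}")
```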
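
As context for the static-quantization example repository above, a minimal post-training static quantization flow using PyTorch's eager-mode `torch.ao.quantization` API might look like this; the toy model and random calibration batches are stand-ins, not that repository's code.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (
    DeQuantStub, QuantStub, convert, get_default_qconfig, prepare,
)

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # observes/quantizes the float input
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = DeQuantStub()  # converts the int8 output back to float

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = SmallNet().eval()
model.qconfig = get_default_qconfig("fbgemm")  # x86 backend config
prepared = prepare(model)                      # insert observers

# Calibrate with a few representative batches (random data as a stand-in).
with torch.no_grad():
    for _ in range(8):
        prepared(torch.randn(1, 3, 32, 32))

quantized = convert(prepared)                  # fold observers into int8 modules
print(quantized)
```

The observers collect activation ranges during calibration, and `convert` uses them to pick the int8 scales and zero points.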