PINTO0309 / tflite2json2tflite
Convert tflite to JSON and make it editable in the IDE. It also converts the edited JSON back to tflite binary.
☆27 · Updated 2 years ago
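The tflite↔JSON round trip is typically driven by FlatBuffers' `flatc` compiler against the TFLite `schema.fbs` (roughly `flatc -t --strict-json --defaults-json schema.fbs -- model.tflite` to dump, and `flatc -b schema.fbs model.json` to rebuild the binary). Once dumped, the model is an ordinary nested JSON structure that can be edited programmatically before converting back. A minimal sketch, assuming the standard schema layout (`subgraphs` → `tensors` → `name`); the `rename_tensor` helper and the inline model excerpt are hypothetical illustrations, not part of the repo:

```python
import json

# Hypothetical excerpt of a flatc-dumped .tflite JSON; real dumps follow
# the schema.fbs layout (version / operator_codes / subgraphs / buffers).
model = {
    "version": 3,
    "subgraphs": [
        {"tensors": [
            {"name": "input_1", "shape": [1, 224, 224, 3], "type": "FLOAT32"},
        ]}
    ],
}

def rename_tensor(model, old, new):
    """Rename every tensor called `old` in every subgraph; return hit count."""
    hits = 0
    for sg in model.get("subgraphs", []):
        for t in sg.get("tensors", []):
            if t.get("name") == old:
                t["name"] = new
                hits += 1
    return hits

print(rename_tensor(model, "input_1", "serving_input"))  # → 1
edited = json.dumps(model, indent=2)  # serialize the edit; feed back to flatc -b
```

Because the JSON mirrors the flatbuffer schema one-to-one, any field visible in the dump (tensor names, shapes, quantization parameters, op options) can be edited this way and survives the round trip back to a `.tflite` binary.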
Alternatives and similar repositories for tflite2json2tflite
Users interested in tflite2json2tflite are comparing it to the libraries listed below.
- Count number of parameters / MACs / FLOPS for ONNX models. ☆92 · Updated 7 months ago
- A very simple tool for situations where optimization with onnx-simplifier would exceed the Protocol Buffers upper file size limit of 2GB,… ☆17 · Updated last year
- Model compression for ONNX ☆96 · Updated 6 months ago
- Inference of quantization aware trained networks using TensorRT ☆81 · Updated 2 years ago
- A very simple tool that compresses the overall size of the ONNX model by aggregating duplicate constant values as much as possible. ☆52 · Updated 2 years ago
- edge/mobile transformer based Vision DNN inference benchmark ☆16 · Updated 4 months ago
- Simple tool for partial optimization of ONNX. Further optimize some models that cannot be optimized with onnx-optimizer and onnxsim by se… ☆19 · Updated last year
- Exports the ONNX file to a JSON file and JSON dict. ☆33 · Updated 2 years ago
- A Toolkit to Help Optimize Large Onnx Model ☆158 · Updated last year
- Roughly calculate FLOPs of a tflite model ☆38 · Updated 3 years ago
- Parse TFLite models (*.tflite) EASILY with Python. Check the API at https://zhenhuaw.me/tflite/docs/ ☆100 · Updated 4 months ago
- PyTorch Quantization Aware Training Example ☆136 · Updated last year
- Converter from MegEngine to other frameworks ☆69 · Updated 2 years ago
- TFLite model analyzer & memory optimizer ☆127 · Updated last year
- AI Edge Quantizer: flexible post training quantization for LiteRT models. ☆41 · Updated this week
- ONNX Python Examples ☆16 · Updated 2 years ago
- QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX ☆149 · Updated this week
- Very simple NCHW and NHWC conversion tool for ONNX. Change to the specified input order for each and every input OP. Also, change the cha… ☆25 · Updated last month
- ☆149 · Updated 2 years ago
- Benchmark inference speed of CNNs with various quantization methods in Pytorch+TensorRT with Jetson Nano/Xavier ☆56 · Updated 2 years ago
- A code generator from ONNX to PyTorch code ☆138 · Updated 2 years ago
- A tool to convert a TensorRT engine/plan to a fake ONNX ☆39 · Updated 2 years ago
- ONNX Command-Line Toolbox ☆35 · Updated 7 months ago
- A Toolkit to Help Optimize Onnx Model ☆148 · Updated last week
- New operators for the ReferenceEvaluator, new kernels for onnxruntime, CPU, CUDA ☆32 · Updated 2 months ago
- A set of simple tools for splitting, merging, OP deletion, size compression, rewriting attributes and constants, OP generation, change op… ☆293 · Updated last year
- Large Language Model Onnx Inference Framework ☆35 · Updated 4 months ago
- simplify >2GB large onnx model ☆57 · Updated 6 months ago
- ONNX converter and optimizer scripts for Kneron hardware. ☆39 · Updated last year
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆23 · Updated last year