google-ai-edge / ai-edge-torch
Supporting PyTorch models with the Google AI Edge TFLite runtime.
☆423 · Updated this week
Alternatives and similar repositories for ai-edge-torch:
Users interested in ai-edge-torch are comparing it to the libraries listed below:
- LiteRT is the new name for TensorFlow Lite (TFLite). While the name is new, it's still the same trusted, high-performance runtime for on-… ☆229 · Updated this week
- PyTorch to Keras/TensorFlow/TFLite conversion made intuitive ☆278 · Updated 5 months ago
- Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massiv… ☆735 · Updated last month
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆349 · Updated this week
- Conversion of PyTorch Models into TFLite ☆364 · Updated last year
- A set of simple tools for splitting, merging, OP deletion, size compression, rewriting attributes and constants, OP generation, change op… ☆283 · Updated 8 months ago
- Common utilities for ONNX converters ☆256 · Updated last month
- This script converts the ONNX/OpenVINO IR model to TensorFlow's saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX and… ☆340 · Updated 2 years ago
- A parser, editor and profiler tool for ONNX models. ☆411 · Updated last week
- The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.)… ☆559 · Updated last week
- TFLite model analyzer & memory optimizer ☆121 · Updated 11 months ago
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… ☆347 · Updated this week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆304 · Updated this week
- TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile / IoT devices. ☆386 · Updated this week
- TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillati… ☆667 · Updated last week
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) a… ☆105 · Updated last month
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆244 · Updated 9 months ago
- Actively maintained ONNX Optimizer ☆657 · Updated 10 months ago
- On-device AI across mobile, embedded and edge for PyTorch ☆2,407 · Updated this week
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… ☆501 · Updated 9 months ago
- Open Neural Network Exchange to C compiler. ☆247 · Updated this week
- The Qualcomm Cloud AI SDK (Platform and Apps) enables high-performance deep learning inference on Qualcomm Cloud AI platforms delivering high … ☆55 · Updated 2 months ago
- Count number of parameters / MACs / FLOPs for ONNX models. ☆89 · Updated 2 months ago
- Generate saved_model, tfjs, tf-trt, EdgeTPU, CoreML, quantized tflite, ONNX, OpenVINO, Myriad Inference Engine blob and .pb from .tflite.… ☆269 · Updated 2 years ago
- Script to typecast ONNX model parameters from INT64 to INT32. ☆99 · Updated 8 months ago
- The no-code AI toolchain ☆80 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆430 · Updated this week