google-ai-edge / LiteRT
LiteRT is the new name for TensorFlow Lite (TFLite). While the name is new, it's still the same trusted, high-performance runtime for on-device AI, now with an expanded vision.
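For orientation, below is a minimal inference sketch against the interpreter API that LiteRT carries over from TFLite. It assumes the `ai-edge-litert` pip package and an illustrative `model.tflite` file (not part of this listing); on older setups the same calls are available as `tf.lite.Interpreter`. This is a sketch, not the repo's documented quickstart.

```python
# Minimal LiteRT inference sketch. "model.tflite" is an illustrative
# placeholder, not a file from this listing.
import numpy as np
from ai_edge_litert.interpreter import Interpreter  # legacy: tf.lite.Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()  # allocate input/output buffers

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy tensor matching the model's declared shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)

interpreter.invoke()  # run the graph
output = interpreter.get_tensor(output_details[0]["index"])
print(output.shape)
```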
☆149 · Updated last month
Related projects
Alternatives and complementary repositories for LiteRT
- Supporting PyTorch models with the Google AI Edge TFLite runtime. ☆371 · Updated this week
- ☆41 · Updated last week
- C++ API for ML inferencing and transfer learning on Coral devices ☆83 · Updated 3 months ago
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime ☆338 · Updated this week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆286 · Updated this week
- An open-source, lightweight, high-performance inference framework for Hailo devices ☆72 · Updated last month
- The no-code AI toolchain ☆75 · Updated this week
- Model Compression Toolkit (MCT) is an open-source project for neural network model optimization under efficient, constrained hardware. Th… ☆328 · Updated this week
- Run generative AI models with a simple C++/Python API using OpenVINO Runtime ☆152 · Updated this week
- Inference of the Vision Transformer (ViT) in plain C/C++ with ggml ☆233 · Updated 7 months ago
- TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile / IoT devices. ☆378 · Updated last week
- Generative AI extensions for onnxruntime ☆514 · Updated this week
- PyTorch to Keras/TensorFlow/TFLite conversion made intuitive ☆268 · Updated 3 months ago
- Common utilities for ONNX converters ☆251 · Updated 5 months ago
- The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.)… ☆497 · Updated last week
- High-performance, optimized, pre-trained template AI application pipelines for systems using Hailo devices ☆94 · Updated last month
- Open Neural Network Exchange (ONNX) to C compiler ☆225 · Updated last week
- TFLite model analyzer & memory optimizer ☆120 · Updated 9 months ago
- Source code for the Coral Dev Board Micro ☆109 · Updated 2 months ago
- ONNX adapter for model-explorer ☆25 · Updated last month
- TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillati… ☆567 · Updated this week
- Converts a TFLite model to JSON so it can be edited in an IDE, then converts the edited JSON back to a TFLite binary ☆27 · Updated last year
- NVIDIA DLA-SW: recipes and tools for running deep learning inference workloads on NVIDIA DLA cores ☆180 · Updated 5 months ago
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) a… ☆63 · Updated last week
- ☆303 · Updated last week
- Qualcomm Cloud AI SDK (Platform and Apps) enables high-performance deep learning inference on Qualcomm Cloud AI platforms, delivering high … ☆55 · Updated 3 weeks ago
- Self-created tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massiv… (see the conversion sketch after this list) ☆705 · Updated 3 weeks ago
- Model compression for ONNX ☆74 · Updated this week
- A set of simple tools for splitting, merging, OP deletion, size compression, rewriting attributes and constants, OP generation, change op… ☆277 · Updated 6 months ago
- Source code for the user-space runtime driver for Coral.ai devices ☆185 · Updated 3 months ago
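As a concrete example of the NCHW-to-NHWC layout problem that the ONNX conversion tool above targets, here is a rough sketch of driving a conversion through onnx2tf's Python entry point. The file names are placeholders, and the keyword arguments are assumptions based on the project's README; consult the repo for the authoritative options.

```python
# Hedged sketch: convert an ONNX graph (NCHW) into a TF SavedModel plus
# TFLite flatbuffers (NHWC) with onnx2tf. Paths are illustrative.
import onnx2tf

onnx2tf.convert(
    input_onnx_file_path="model.onnx",  # source ONNX model, NCHW layout
    output_folder_path="saved_model",   # receives SavedModel + .tflite files
)
```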