google-ai-edge / LiteRT
LiteRT is the new name for TensorFlow Lite (TFLite). While the name is new, it's still the same trusted, high-performance runtime for on-device AI, now with an expanded vision.
☆138 · Updated last month
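For orientation, here is a minimal inference sketch against the interpreter API that LiteRT carries over from TFLite. The `ai-edge-litert` package name, the `model.tflite` path, and the random input are illustrative assumptions; the classic `tflite-runtime` package exposes an equivalent `Interpreter` class.

```python
# Minimal LiteRT/TFLite inference sketch (assumed package: ai-edge-litert;
# "model.tflite" is a placeholder path, not a file from this repo).
import numpy as np
from ai_edge_litert.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()  # allocate buffers for all tensors

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a random tensor shaped like the model's first input.
x = np.random.rand(*input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()  # run the model

y = interpreter.get_tensor(output_details[0]["index"])
print(y.shape)
```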
Related projects
Alternatives and complementary repositories for LiteRT
- Supporting PyTorch models with the Google AI Edge TFLite runtime. ☆364 · Updated this week
- The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.)… ☆483 · Updated last week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python (a sketch follows after this list). ☆282 · Updated this week
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆334 · Updated this week
- TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile/IoT devices. ☆377 · Updated 2 weeks ago
- The no-code AI toolchain ☆74 · Updated 2 weeks ago
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… ☆324 · Updated this week
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆229 · Updated 6 months ago
- An open-source, lightweight, high-performance inference framework for Hailo devices ☆69 · Updated last month
- Common utilities for ONNX converters ☆251 · Updated 4 months ago
- Model compression for ONNX ☆73 · Updated 3 weeks ago
- High-performance, optimized pre-trained template AI application pipelines for systems using Hailo devices ☆88 · Updated last month
- Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massiv… ☆699 · Updated 2 weeks ago
- ☆303 · Updated this week
- C++ API for ML inferencing and transfer-learning on Coral devices ☆83 · Updated 2 months ago
- TFLite model analyzer & memory optimizer ☆120 · Updated 9 months ago
- Open Neural Network Exchange to C compiler. ☆221 · Updated last week
- TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillati… ☆536 · Updated this week
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆145 · Updated this week
- Convert tflite to JSON and make it editable in the IDE. It also converts the edited JSON back to tflite binary. ☆27 · Updated last year
- Stable Diffusion in pure C/C++ ☆59 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆250 · Updated 3 weeks ago
- On-device AI across mobile, embedded and edge for PyTorch ☆2,135 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆407 · Updated this week
- Source code for Coral Dev Board Micro ☆109 · Updated 2 months ago
- A set of simple tools for splitting, merging, OP deletion, size compression, rewriting attributes and constants, OP generation, change op… ☆276 · Updated 6 months ago
- Source code for the userspace-level runtime driver for Coral.ai devices. ☆184 · Updated 2 months ago
- Generative AI extensions for onnxruntime ☆502 · Updated this week
- ☆102 · Updated 3 weeks ago
- PyTorch to Keras/TensorFlow/TFLite conversion made intuitive ☆267 · Updated 2 months ago
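On the ONNX Script entry above: a hedged sketch of what authoring an ONNX function in plain Python looks like. The function name, shape annotation, and opset choice are illustrative assumptions, not taken from the repo.

```python
# Hedged ONNX Script sketch: author an ONNX function as plain Python.
# l2_normalize and the dynamic dimension "N" are illustrative choices.
from onnxscript import FLOAT, script
from onnxscript import opset18 as op

@script()
def l2_normalize(x: FLOAT["N"]) -> FLOAT["N"]:
    # Arithmetic operators lower to ONNX ops (Mul, Div); ReduceSum and
    # Sqrt are called through the opset module.
    return x / op.Sqrt(op.ReduceSum(x * x, keepdims=1))

# The decorated function can be converted to a standalone ONNX model.
model = l2_normalize.to_model_proto()
```

Assuming this matches the installed onnxscript version, the resulting ModelProto can be saved with `onnx.save` and executed under ONNX Runtime like any other model.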