google-ai-edge / LiteRT
LiteRT is the new name for TensorFlow Lite (TFLite). While the name is new, it's still the same trusted, high-performance runtime for on-device AI, now with an expanded vision.
☆386 · Updated this week
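As a quick orientation, here is a minimal sketch of running inference through the LiteRT Python API. It assumes the `ai-edge-litert` pip package is installed and a hypothetical local `model.tflite` exists; the classic `tf.lite.Interpreter` exposes the same calls.

```python
# Minimal LiteRT inference sketch (assumptions: the ai-edge-litert pip
# package is installed and a model.tflite file exists locally).
import numpy as np
from ai_edge_litert.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a random tensor matching the model's expected input shape and dtype.
dummy = np.random.random_sample(input_details[0]["shape"]).astype(
    input_details[0]["dtype"]
)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

result = interpreter.get_tensor(output_details[0]["index"])
print(result.shape)
```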
Alternatives and similar repositories for LiteRT
Users interested in LiteRT are comparing it to the libraries listed below.
- Supporting PyTorch models with the Google AI Edge TFLite runtime (see the conversion sketch after this list). ☆569 · Updated this week
- The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.)… ☆687 · Updated this week
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… ☆394 · Updated last week
- TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile / IoT devices. ☆405 · Updated 3 weeks ago
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime. ☆385 · Updated this week
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) a… ☆189 · Updated this week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆349 · Updated this week
- AI Edge Quantizer: flexible post-training quantization for LiteRT models. ☆32 · Updated this week
- PyTorch to Keras/TensorFlow/TFLite conversion made intuitive. ☆308 · Updated 2 months ago
- Open Neural Network Exchange to C compiler. ☆272 · Updated last month
- High-efficiency floating-point neural network inference operators for mobile, server, and Web. ☆2,016 · Updated this week
- Generative AI extensions for onnxruntime. ☆710 · Updated this week
- Conversion of PyTorch Models into TFLite. ☆375 · Updated 2 years ago
- This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai ☆37 · Updated this week
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime. ☆274 · Updated this week
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… ☆565 · Updated last year
- A curated list of OpenVINO-based AI projects. ☆132 · Updated 4 months ago
- On-device AI across mobile, embedded, and edge for PyTorch. ☆2,829 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools. ☆464 · Updated this week
- Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massiv… ☆791 · Updated last month
- The Qualcomm Cloud AI SDK (Platform and Apps) enables high-performance deep learning inference on Qualcomm Cloud AI platforms delivering high … ☆60 · Updated 6 months ago
- MLPerf™ Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers. ☆403 · Updated this week
- Local LLM Server with NPU Acceleration. ☆180 · Updated last week
- Pure C ONNX runtime with zero dependencies for embedded devices. ☆204 · Updated last year
- Run Vision Transformer (ViT) inference in plain C/C++ with ggml. ☆280 · Updated last year
- A toolkit to help optimize ONNX models. ☆145 · Updated this week
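For the ai-edge-torch entry at the top of this list, here is a minimal conversion sketch. It assumes the `ai-edge-torch` pip package; the `resnet18` model and the input shape are illustrative assumptions, not taken from the listing.

```python
# Sketch: convert a PyTorch model to a LiteRT/TFLite flatbuffer with
# ai-edge-torch. resnet18 and the (1, 3, 224, 224) input are assumptions.
import ai_edge_torch
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
sample_inputs = (torch.randn(1, 3, 224, 224),)

# Trace and convert the PyTorch module into an on-device edge model.
edge_model = ai_edge_torch.convert(model, sample_inputs)

# Sanity-check the converted model in-process...
edge_output = edge_model(*sample_inputs)

# ...then serialize it for the LiteRT runtime.
edge_model.export("resnet18.tflite")
```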