google-ai-edge / LiteRT
LiteRT is the new name for TensorFlow Lite (TFLite). While the name is new, it's still the same trusted, high-performance runtime for on-device AI, now with an expanded vision.
☆469 · Updated this week
Alternatives and similar repositories for LiteRT
Users interested in LiteRT are comparing it to the libraries listed below.
- Supporting PyTorch models with the Google AI Edge TFLite runtime. ☆620 · Updated this week
- TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile/IoT devices. ☆406 · Updated last month
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime. ☆282 · Updated this week
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) a… ☆207 · Updated 2 weeks ago
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime. ☆390 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools. ☆467 · Updated this week
- The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.)… ☆706 · Updated this week
- Generative AI extensions for onnxruntime. ☆728 · Updated this week
- A curated list of OpenVINO-based AI projects. ☆134 · Updated 5 months ago
- Self-created tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massiv… ☆803 · Updated last week
- Inference of a Vision Transformer (ViT) in plain C/C++ with ggml. ☆285 · Updated last year
- A toolkit to help optimize ONNX models. ☆153 · Updated this week
- Model Compression Toolkit (MCT) is an open-source project for neural network model optimization under efficient, constrained hardware. Th… ☆398 · Updated last week
- Visualize ONNX models with model-explorer. ☆34 · Updated 2 weeks ago
- AI Edge Quantizer: flexible post-training quantization for LiteRT models. ☆41 · Updated this week
- ☆229 · Updated 2 years ago
- 🤗 Optimum ExecuTorch. ☆46 · Updated last week
- No-code CLI designed for accelerating ONNX workflows. ☆192 · Updated 2 weeks ago
- PyTorch to Keras/TensorFlow/TFLite conversion made intuitive. ☆311 · Updated 2 months ago
- ☆140 · Updated 3 months ago
- MLPerf™ Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers. ☆410 · Updated this week
- High-efficiency floating-point neural network inference operators for mobile, server, and Web. ☆2,039 · Updated this week
- A modern model graph visualizer and debugger. ☆1,212 · Updated last week
- An innovative library for efficient LLM inference via low-bit quantization. ☆348 · Updated 9 months ago
- High-performance, optimized, pre-trained template AI application pipelines for systems using Hailo devices. ☆139 · Updated 2 months ago
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… ☆567 · Updated last year
- Awesome Mobile LLMs. ☆195 · Updated 2 months ago
- Arm NN ML Software. The code here is a read-only mirror of https://review.mlplatform.org/admin/repos/ml/armnn. ☆1,263 · Updated this week
- This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai. ☆43 · Updated this week
- Repository for OpenVINO's extra modules. ☆125 · Updated 2 weeks ago