google-ai-edge / LiteRT
LiteRT is the new name for TensorFlow Lite (TFLite). While the name is new, it's still the same trusted, high-performance runtime for on-device AI, now with an expanded vision.
☆304 · Updated this week
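Because LiteRT keeps the TFLite interpreter API, existing TFLite inference code carries over largely unchanged. Below is a minimal inference sketch, assuming the `ai-edge-litert` Python package is installed and a placeholder `model.tflite` file exists locally (both are assumptions for illustration, not part of this listing):

```python
# Minimal LiteRT inference sketch (assumes: pip install ai-edge-litert
# and a local "model.tflite" file; the Interpreter API mirrors the
# familiar tf.lite.Interpreter).
import numpy as np
from ai_edge_litert.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input matching the model's expected shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

result = interpreter.get_tensor(output_details[0]["index"])
print(result.shape)
```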
Alternatives and similar repositories for LiteRT:
Users interested in LiteRT are comparing it to the libraries listed below.
- Supporting PyTorch models with the Google AI Edge TFLite runtime. ☆474 · Updated this week
- TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile/IoT devices. ☆394 · Updated this week
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime. ☆365 · Updated this week
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… ☆371 · Updated this week
- The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.)… ☆625 · Updated last week
- Open Neural Network Exchange to C compiler. ☆261 · Updated 2 months ago
- Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massiv… ☆761 · Updated this week
- PyTorch to Keras/TensorFlow/TFLite conversion made intuitive. ☆294 · Updated this week
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) a… ☆159 · Updated 2 weeks ago
- MLPerf™ Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers. ☆387 · Updated last week
- Common utilities for ONNX converters. ☆259 · Updated 3 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools. ☆447 · Updated this week
- Conversion of PyTorch models into TFLite. ☆370 · Updated last year
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆323 · Updated this week
- Inference Vision Transformer (ViT) in plain C/C++ with ggml. ☆260 · Updated 11 months ago
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… ☆531 · Updated 11 months ago
- Arm Machine Learning tutorials and examples. ☆448 · Updated 3 months ago
- LLM SDK for OnnxRuntime GenAI (OGA). ☆104 · Updated this week
- High-performance, optimized pre-trained template AI application pipelines for systems using Hailo devices. ☆119 · Updated 2 months ago
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime. ☆227 · Updated this week
- Use safetensors with ONNX 🤗 ☆48 · Updated last week
- An open-source, lightweight, high-performance inference framework for Hailo devices. ☆95 · Updated last month
- On-device AI across mobile, embedded and edge for PyTorch. ☆2,584 · Updated this week
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… ☆841 · Updated 3 months ago
- High-efficiency floating-point neural network inference operators for mobile, server, and Web. ☆1,979 · Updated this week
- Model compression for ONNX. ☆87 · Updated 3 months ago
- Generative AI extensions for onnxruntime. ☆645 · Updated this week