google-ai-edge / LiteRT
LiteRT is the new name for TensorFlow Lite (TFLite). While the name is new, it's still the same trusted, high-performance runtime for on-device AI, now with an expanded vision.
☆359 · Updated this week
Alternatives and similar repositories for LiteRT:
Users interested in LiteRT are comparing it to the libraries listed below.
- Supporting PyTorch models with the Google AI Edge TFLite runtime. ☆543 · Updated this week
- The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.)… ☆668 · Updated last week
- Model Compression Toolkit (MCT) is an open-source project for neural network model optimization under efficient, constrained hardware. Th… ☆386 · Updated this week
- TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile / IoT devices. ☆398 · Updated last month
- AI Edge Quantizer: flexible post-training quantization for LiteRT models. ☆32 · Updated this week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆337 · Updated this week
- Local LLM server with NPU acceleration. ☆144 · Updated this week
- Self-created tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massiv… ☆784 · Updated last week
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) a… ☆178 · Updated last week
- Open Neural Network Exchange (ONNX) to C compiler. ☆272 · Updated last week
- PyTorch to Keras/TensorFlow/TFLite conversion made intuitive. ☆304 · Updated last month
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime. ☆373 · Updated last week
- The Qualcomm Cloud AI SDK (Platform and Apps) enables high-performance deep learning inference on Qualcomm Cloud AI platforms, delivering high … ☆58 · Updated 5 months ago
- ☆131 · Updated last month
- An open-source, lightweight, high-performance inference framework for Hailo devices. ☆104 · Updated 2 weeks ago
- This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai ☆28 · Updated this week
- Visualize ONNX models with model-explorer. ☆31 · Updated last month
- Inference of a Vision Transformer (ViT) in plain C/C++ with ggml. ☆266 · Updated last year
- A collection of machine learning models for vision, optimized for NXP products. ☆21 · Updated 10 months ago
- Awesome Mobile LLMs. ☆166 · Updated 3 weeks ago
- Model compression for ONNX. ☆91 · Updated 5 months ago
- CMSIS-NN Library. ☆264 · Updated this week
- Common utilities for ONNX converters. ☆266 · Updated 4 months ago
- Conversion of PyTorch models into TFLite. ☆374 · Updated 2 years ago
- A toolkit to help optimize ONNX models. ☆139 · Updated this week
- MLPerf™ Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers. ☆397 · Updated last week
- C++ API for ML inferencing and transfer learning on Coral devices. ☆89 · Updated 8 months ago
- Run generative AI models on the Sophgo BM1684X. ☆196 · Updated this week
- ☆156 · Updated 3 weeks ago
- High-performance, optimized, pre-trained template AI application pipelines for systems using Hailo devices. ☆131 · Updated 3 weeks ago