google-ai-edge / LiteRT
LiteRT continues the legacy of TensorFlow Lite as the trusted, high-performance runtime for on-device AI. Now with LiteRT Next, we're expanding our vision with a new generation of APIs designed for superior performance and simplified hardware acceleration. Discover what's next for on-device AI.
☆772 · Updated this week
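For orientation, here is a minimal sketch of running inference with the LiteRT Python interpreter. It assumes the ai-edge-litert pip package is installed; the model path "model.tflite" is a placeholder, and the input/output handling follows the familiar TFLite Interpreter API.

```python
# Minimal LiteRT inference sketch (assumes: pip install ai-edge-litert
# and a placeholder model file "model.tflite").
import numpy as np
from ai_edge_litert.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a zero-filled input matching the model's declared shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)

interpreter.invoke()

result = interpreter.get_tensor(output_details[0]["index"])
print(result.shape)
```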
Alternatives and similar repositories for LiteRT
Users interested in LiteRT are comparing it to the libraries listed below:
- Supporting PyTorch models with the Google AI Edge TFLite runtime. ☆777 · Updated this week
- The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.)… ☆782 · Updated 2 weeks ago
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) a… ☆284 · Updated 2 weeks ago
- On-device AI across mobile, embedded, and edge for PyTorch. ☆3,221 · Updated this week
- TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile/IoT devices. ☆421 · Updated 3 weeks ago
- Generative AI extensions for onnxruntime. ☆825 · Updated this week
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime. ☆411 · Updated this week
- AI Edge Quantizer: flexible post-training quantization for LiteRT models. ☆64 · Updated this week
- High-efficiency floating-point neural network inference operators for mobile, server, and Web. ☆2,113 · Updated this week
- This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai ☆77 · Updated this week
- Awesome Mobile LLMs ☆241 · Updated last month
- Self-created tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massiv… ☆850 · Updated last month
- TinyChatEngine: an on-device LLM inference library. ☆892 · Updated last year
- Run generative AI models with a simple C++/Python API using the OpenVINO Runtime. ☆334 · Updated this week
- Run Vision Transformer (ViT) inference in plain C/C++ with ggml. ☆294 · Updated last year
- 🤗 Optimum ExecuTorch ☆64 · Updated last week
- Intel® NPU Acceleration Library ☆689 · Updated 4 months ago
- Model Compression Toolkit (MCT) is an open-source project for neural network model optimization under efficient, constrained hardware. Th… ☆414 · Updated 2 months ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆381 · Updated this week
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… ☆599 · Updated last year
- Low-bit LLM inference on CPU/NPU with lookup tables. ☆852 · Updated 3 months ago
- A no-code CLI designed for accelerating ONNX workflows. ☆214 · Updated 3 months ago
- 🤗 Optimum Intel: accelerate inference with Intel optimization tools. ☆489 · Updated this week
- An Open Neural Network Exchange (ONNX) to C compiler. ☆314 · Updated 2 weeks ago
- A toolkit to help optimize ONNX models. ☆208 · Updated last week
- A unified library of state-of-the-art model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. … ☆1,344 · Updated this week
- Examples for using ONNX Runtime for machine learning inferencing; a minimal sketch follows below. ☆1,474 · Updated this week
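Since several of the listed projects build on ONNX Runtime, here is a minimal sketch of Python inference with it. The model path "model.onnx" and the NCHW input shape are placeholders; the real input name and shape come from the exported model's graph.

```python
# Minimal ONNX Runtime inference sketch (assumes: pip install onnxruntime
# and a placeholder model file "model.onnx").
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name  # read the input name from the graph

# Assumed NCHW image batch; adjust to the model's actual input shape/dtype.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: x})  # None returns all outputs
print(outputs[0].shape)
```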