google-ai-edge / LiteRT
LiteRT continues the legacy of TensorFlow Lite as the trusted, high-performance runtime for on-device AI. Now with LiteRT Next, we're expanding our vision with a new generation of APIs designed for superior performance and simplified hardware acceleration. Discover what's next for on-device AI.
☆595 · Updated this week
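As a quick orientation to the runtime described above, here is a minimal Python inference sketch. It assumes the `ai-edge-litert` pip package and a placeholder `model.tflite`; the `Interpreter` class mirrors the familiar TensorFlow Lite Python API, so this is a sketch rather than a definitive usage guide.

```python
# Minimal LiteRT inference sketch (assumes the ai-edge-litert pip package
# and a placeholder model.tflite; Interpreter mirrors the TFLite Python API).
import numpy as np
from ai_edge_litert.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a random tensor matching the model's declared input shape and dtype.
dummy = np.random.rand(*input_details[0]["shape"]).astype(input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

result = interpreter.get_tensor(output_details[0]["index"])
print(result.shape)
```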
Alternatives and similar repositories for LiteRT
Users interested in LiteRT are comparing it to the libraries listed below.
- Supporting PyTorch models with the Google AI Edge TFLite runtime (see the conversion sketch after this list). ☆678 · Updated this week
- ☆134 · Updated 5 months ago
- The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.)… ☆719 · Updated 2 weeks ago
- AI Edge Quantizer: flexible post-training quantization for LiteRT models. ☆49 · Updated this week
- TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile/IoT devices. ☆408 · Updated 2 months ago
- ☆234 · Updated this week
- Open Neural Network Exchange (ONNX) to C compiler. ☆296 · Updated 2 weeks ago
- Efficient inference of Transformer models. ☆436 · Updated 10 months ago
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime (see the custom-ops sketch after this list). ☆393 · Updated last week
- Inference of Vision Transformer (ViT) in plain C/C++ with ggml. ☆288 · Updated last year
- A toolkit to help optimize ONNX models. ☆159 · Updated this week
- 🤗 Optimum ExecuTorch. ☆53 · Updated this week
- Generative AI extensions for onnxruntime. ☆740 · Updated this week
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… ☆579 · Updated last year
- On-device AI across mobile, embedded, and edge for PyTorch. ☆2,968 · Updated this week
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) a… ☆216 · Updated 2 weeks ago
- High-efficiency floating-point neural network inference operators for mobile, server, and Web. ☆2,051 · Updated this week
- Awesome Mobile LLMs. ☆204 · Updated 3 weeks ago
- ☆231 · Updated 2 years ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆358 · Updated this week
- ☆915 · Updated last year
- This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai ☆51 · Updated this week
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… ☆400 · Updated this week
- ☆143 · Updated 3 months ago
- Self-created tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massiv… ☆814 · Updated last week
- A unified library of state-of-the-art model optimization techniques like quantization, pruning, distillation, speculative decoding, etc.… ☆1,006 · Updated last week
- No-code CLI designed for accelerating ONNX workflows. ☆196 · Updated 2 weeks ago
- A minimalistic C++ Jinja templating engine for LLM chat templates. ☆156 · Updated last month
- ☆158 · Updated last week
- TinyChatEngine: On-Device LLM Inference Library. ☆865 · Updated 11 months ago
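For the first item in the list (PyTorch models on the Google AI Edge TFLite runtime), a rough conversion sketch, assuming the `ai_edge_torch` package's `convert`/`export` flow and an off-the-shelf torchvision model; treat it as an illustration, not the project's canonical recipe.

```python
# Rough PyTorch -> LiteRT conversion sketch for the ai-edge-torch item above.
# Assumes the ai-edge-torch package: convert() traces the model with the given
# sample inputs and export() writes a .tflite flatbuffer.
import torch
import torchvision
import ai_edge_torch

model = torchvision.models.resnet18(weights=None).eval()
sample_inputs = (torch.randn(1, 3, 224, 224),)

edge_model = ai_edge_torch.convert(model, sample_inputs)
edge_model.export("resnet18.tflite")

# The converted model is callable, so outputs can be compared before deployment.
with torch.no_grad():
    torch_out = model(*sample_inputs)
edge_out = edge_model(*sample_inputs)
```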
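For the onnxruntime-extensions item, a short sketch of its typical use: registering the extensions' custom-operator library with an ONNX Runtime session. The model filename is a placeholder, and the flow assumes the `onnxruntime` and `onnxruntime-extensions` packages.

```python
# Sketch: registering onnxruntime-extensions custom ops with an ORT session.
# Assumes the onnxruntime and onnxruntime-extensions packages; the model file
# name is a placeholder for a graph that uses one of the extension operators.
import onnxruntime as ort
from onnxruntime_extensions import get_library_path

options = ort.SessionOptions()
options.register_custom_ops_library(get_library_path())

session = ort.InferenceSession("custom_ops_model.onnx", options)
print([inp.name for inp in session.get_inputs()])
```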