google-ai-edge / LiteRT
LiteRT continues the legacy of TensorFlow Lite as the trusted, high-performance runtime for on-device AI. Now with LiteRT Next, we're expanding our vision with a new generation of APIs designed for superior performance and simplified hardware acceleration. Discover what's next for on-device AI.
☆872 · Updated last week
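As a quick orientation before the comparison list, here is a minimal sketch of loading and running a `.tflite` model with LiteRT's classic interpreter API (the newer LiteRT Next `CompiledModel` API targets the same models). It assumes the `ai-edge-litert` pip package is installed; `model.tflite` and the dummy input are placeholders, not a definitive integration.

```python
# Minimal LiteRT inference sketch, assuming `pip install ai-edge-litert`.
# "model.tflite" is a placeholder path to any LiteRT/TFLite flatbuffer model.
import numpy as np
from ai_edge_litert.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()  # allocate buffers for all tensors

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input matching the model's declared shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()  # run inference

result = interpreter.get_tensor(output_details[0]["index"])
print(result.shape)
```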
Alternatives and similar repositories for LiteRT
Users interested in LiteRT are comparing it to the libraries listed below.
- The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.)… ☆808 · Updated last week
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) a… ☆320 · Updated last week
- ☆430 · Updated last week
- Generative AI extensions for onnxruntime ☆861 · Updated this week
- TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile / IoT devices. ☆427 · Updated last week
- Run Generative AI models with simple C++/Python APIs using OpenVINO Runtime ☆364 · Updated this week
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,144 · Updated this week
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆418 · Updated this week
- On-device AI across mobile, embedded and edge for PyTorch (see the export sketch after this list) ☆3,374 · Updated this week
- This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai ☆90 · Updated last week
- Awesome Mobile LLMs ☆256 · Updated last week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆404 · Updated this week
- AI Edge Quantizer: flexible post-training quantization for LiteRT models. ☆72 · Updated this week
- ☆164 · Updated 4 months ago
- No-code CLI designed for accelerating ONNX workflows ☆215 · Updated 4 months ago
- 🤗 Optimum ExecuTorch ☆69 · Updated last week
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… ☆419 · Updated last week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆502 · Updated this week
- A modern model graph visualizer and debugger ☆1,325 · Updated this week
- TinyChatEngine: On-Device LLM Inference Library ☆906 · Updated last year
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… ☆605 · Updated last year
- Open Neural Network Exchange to C compiler. ☆328 · Updated last week
- Intel® NPU Acceleration Library ☆692 · Updated 6 months ago
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆295 · Updated last year
- Low-bit LLM inference on CPU/NPU with lookup table ☆876 · Updated 4 months ago
- ☆156 · Updated last month
- Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs. ☆2,159 · Updated last week
- A toolkit to help optimize ONNX models ☆228 · Updated this week
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… ☆898 · Updated 11 months ago
- Efficient Inference of Transformer models ☆461 · Updated last year
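For the ExecuTorch entry above, a minimal sketch of its documented PyTorch-to-`.pte` export flow, assuming the `executorch` package is installed; `TinyModel`, the input shape, and the output filename are illustrative placeholders.

```python
# ExecuTorch export sketch: capture a PyTorch module, lower it to the
# Edge dialect, and serialize it as a .pte program for on-device execution.
import torch
from executorch.exir import to_edge

class TinyModel(torch.nn.Module):  # hypothetical example module
    def forward(self, x):
        return torch.nn.functional.relu(x)

# Capture the model graph with torch.export using example inputs.
exported = torch.export.export(TinyModel().eval(), (torch.randn(1, 8),))

# Lower to the Edge dialect, then to an ExecuTorch program.
edge_program = to_edge(exported)
et_program = edge_program.to_executorch()

# Write the serialized program; the on-device runtime loads this file.
with open("tiny_model.pte", "wb") as f:
    f.write(et_program.buffer)
```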