LiteRT, the successor to TensorFlow Lite, is Google's on-device framework for high-performance ML and GenAI deployment on edge platforms via efficient conversion, runtime, and optimization.
☆1,963 · Mar 20, 2026 · Updated last week
Alternatives and similar repositories for LiteRT
Users interested in LiteRT are comparing it to the libraries listed below.
- Support PyTorch model conversion with LiteRT. ☆965 · Mar 20, 2026 · Updated last week
- ☆990 · Updated this week
- AI Edge Quantizer: flexible post-training quantization for LiteRT models. ☆107 · Updated this week
- On-device AI across mobile, embedded and edge for PyTorch. ☆4,415 · Updated this week
- High-efficiency floating-point neural network inference operators for mobile, server, and Web. ☆2,281 · Updated this week
- Infrastructure to enable deployment of ML models to low-power resource-constrained embedded targets (including microcontrollers and digit… ☆2,814 · Mar 19, 2026 · Updated last week
- A modern model graph visualizer and debugger. ☆1,410 · Mar 17, 2026 · Updated last week
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) a… ☆390 · Mar 13, 2026 · Updated 2 weeks ago
- TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile/IoT devices. ☆434 · Mar 19, 2026 · Updated last week
- [EMNLP Findings 2024] MobileQuant: Mobile-friendly Quantization for On-device Language Models. ☆67 · Sep 22, 2024 · Updated last year
- ☆15 · Dec 4, 2024 · Updated last year
- Qualcomm® AI Hub Models is our collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) an… ☆956 · Updated this week
- This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai. ☆124 · Mar 18, 2026 · Updated last week
- A gallery that showcases on-device ML/GenAI use cases and allows people to try and use models locally. ☆15,421 · Updated this week
- ☆185 · Mar 16, 2026 · Updated last week
- ☆2,598 · Updated this week
- Cross-platform, customizable ML solutions for live and streaming media. ☆34,307 · Updated this week
- ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator. ☆19,643 · Updated this week
- A machine learning compiler for GPUs, CPUs, and ML accelerators. ☆4,100 · Updated this week
- A retargetable MLIR-based machine learning compiler and runtime toolkit. ☆3,670 · Updated this week
- ☆18 · Jul 22, 2025 · Updated 8 months ago
- On-device Neural Engine. ☆562 · Updated this week
- Lightweight, standalone C++ inference engine for Google's Gemma models. ☆6,755 · Mar 19, 2026 · Updated last week
- Tensor library for machine learning. ☆14,252 · Mar 16, 2026 · Updated last week
- Low-latency AI engine for mobile devices & wearables. ☆4,520 · Updated this week
- Let's use Qualcomm NPU in Android. ☆18 · Feb 18, 2025 · Updated last year
- A tool for converting ONNX files to LiteRT/TFLite/TensorFlow, PyTorch native code (nn.Module), TorchScript (.pt), state_dict (.pt), Expor… ☆936 · Mar 20, 2026 · Updated last week
- Official inference framework for 1-bit LLMs. ☆35,906 · Mar 10, 2026 · Updated 2 weeks ago
- MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering high-performance on-device LLMs and Edge AI. ☆14,618 · Mar 20, 2026 · Updated last week
- Universal LLM Deployment Engine with ML Compilation. ☆22,246 · Mar 18, 2026 · Updated last week
- Fast Multimodal LLM on Mobile Devices. ☆1,437 · Mar 18, 2026 · Updated last week
- Development repository for the Triton language and compiler. ☆18,708 · Updated this week
- A Python library for converting PyTorch modules into a circle model that is a lightweight and efficient representation in ONE designed fo… ☆16 · Updated this week
- LLM inference in C/C++. ☆98,911 · Updated this week
- Examples for using ONNX Runtime for machine learning inferencing. ☆1,632 · Feb 24, 2026 · Updated last month
- TT-NN operator library, and TT-Metalium low-level kernel programming model. ☆1,385 · Mar 20, 2026 · Updated last week
- ☆528 · Mar 17, 2026 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆74,135 · Updated this week
- Simple tool for partial optimization of ONNX. Further optimizes some models that cannot be optimized with onnx-optimizer and onnxsim by se… ☆19 · May 7, 2024 · Updated last year