google-ai-edge / LiteRT
LiteRT, successor to TensorFlow Lite, is Google's on-device framework for high-performance ML & GenAI deployment on edge platforms, via efficient conversion, runtime, and optimization.
☆933 · Updated this week
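As a quick orientation before the comparison list, here is a minimal sketch of the runtime half of that pipeline: loading a converted model with LiteRT's Python interpreter and running one inference. The `ai-edge-litert` package name, the `model.tflite` path, and the random input are illustrative assumptions, not part of this listing.

```python
# Minimal LiteRT inference sketch (assumes `pip install ai-edge-litert`
# and an already-converted model at model.tflite -- placeholder path).
import numpy as np
from ai_edge_litert.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a random tensor matching the model's expected input shape/dtype.
dummy_input = np.random.rand(*input_details[0]["shape"]).astype(
    input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()

output = interpreter.get_tensor(output_details[0]["index"])
print(output.shape)
```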
Alternatives and similar repositories for LiteRT
Users interested in LiteRT are comparing it to the libraries listed below
- Supporting PyTorch models with the Google AI Edge TFLite runtime (see the conversion sketch after this list).☆828 · Updated this week
- The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.)…☆831 · Updated 2 weeks ago
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) a…☆338 · Updated this week
- ☆468 · Updated this week
- Generative AI extensions for onnxruntime☆878 · Updated this week
- ☆184 · Updated this week
- TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile / IoT devices.☆426 · Updated this week
- On-device AI across mobile, embedded and edge for PyTorch☆3,507 · Updated this week
- Run Generative AI models with a simple C++/Python API using OpenVINO Runtime☆371 · Updated this week
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime☆424 · Updated this week
- Self-created tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massiv…☆875 · Updated 3 weeks ago
- High-efficiency floating-point neural network inference operators for mobile, server, and Web☆2,170 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools☆507 · Updated this week
- This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai☆93 · Updated this week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python.☆408 · Updated this week
- AI Edge Quantizer: flexible post-training quantization for LiteRT models.☆76 · Updated this week
- 🤗 Optimum ExecuTorch☆77 · Updated this week
- ☆166 · Updated this week
- Awesome Mobile LLMs☆270 · Updated 3 weeks ago
- Examples for using ONNX Runtime for machine learning inferencing.☆1,530 · Updated this week
- Intel® NPU Acceleration Library☆694 · Updated 6 months ago
- Inference Vision Transformer (ViT) in plain C/C++ with ggml☆298 · Updated last year
- Low-bit LLM inference on CPU/NPU with lookup table☆887 · Updated 5 months ago
- A toolkit to help optimize ONNX models☆236 · Updated last week
- Advanced quantization algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA, and HPU.☆701 · Updated this week
- TinyChatEngine: On-Device LLM Inference Library☆923 · Updated last year
- ONNX Optimizer☆770 · Updated 2 weeks ago
- No-code CLI designed for accelerating ONNX workflows☆216 · Updated 5 months ago
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024).☆1,390 · Updated 6 months ago
- Common utilities for ONNX converters☆283 · Updated 2 months ago
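The first entry above (ai-edge-torch) covers the conversion side of the same pipeline. Below is a hedged sketch of its published convert/export flow; the ResNet-18 model and the 1×3×224×224 sample input are illustrative choices, not something prescribed by this listing.

```python
# Sketch of converting a PyTorch model to a LiteRT flatbuffer with
# ai-edge-torch (assumes `pip install ai-edge-torch` plus torchvision;
# the model choice and input shape here are illustrative).
import torch
import torchvision
import ai_edge_torch

model = torchvision.models.resnet18(weights=None).eval()
sample_inputs = (torch.randn(1, 3, 224, 224),)

# convert() traces the model and lowers it to a LiteRT-compatible graph.
edge_model = ai_edge_torch.convert(model, sample_inputs)

# The converted model can be invoked directly for a parity check...
out = edge_model(*sample_inputs)

# ...and exported as a .tflite file for the on-device runtime.
edge_model.export("resnet18.tflite")
```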