google-ai-edge / LiteRT
LiteRT, the successor to TensorFlow Lite, is Google's on-device framework for high-performance ML & GenAI deployment on edge platforms, via efficient conversion, runtime, and optimization.
☆1,444 · Updated this week
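Before the comparison list, a minimal sketch of what running a LiteRT model looks like in Python. This assumes the `ai-edge-litert` pip package, whose `Interpreter` mirrors the classic `tf.lite.Interpreter` API; the model path is a placeholder:

```python
import numpy as np
# Assumes the `ai-edge-litert` package; Interpreter mirrors tf.lite.Interpreter.
from ai_edge_litert.interpreter import Interpreter

# Load a converted .tflite flatbuffer (placeholder path) and allocate buffers.
interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a random tensor matching the model's input shape and run inference.
x = np.random.rand(*input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
y = interpreter.get_tensor(output_details[0]["index"])
print(y.shape)
```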
Alternatives and similar repositories for LiteRT
Users interested in LiteRT are comparing it to the libraries listed below.
- Supports PyTorch model conversion to LiteRT. ☆935 · Updated this week (conversion sketch after this list)
- Qualcomm® AI Hub Models is our collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) an… ☆915 · Updated 2 weeks ago
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) a… ☆369 · Updated 2 weeks ago
- On-device AI across mobile, embedded, and edge for PyTorch ☆4,258 · Updated this week (export sketch after this list)
- Generative AI extensions for onnxruntime ☆957 · Updated this week
- Run generative AI models with a simple C++/Python API on top of the OpenVINO Runtime ☆433 · Updated this week (generation sketch after this list)
- TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile/IoT devices. ☆433 · Updated this week
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆441 · Updated last week
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,245 · Updated this week
- Self-created tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massiv… ☆923 · Updated this week
- 🤗 Optimum ExecuTorch ☆108 · Updated last week
- AI Edge Quantizer: flexible post-training quantization for LiteRT models. ☆99 · Updated this week (quantization sketch after this list)
- Awesome Mobile LLMs ☆301 · Updated 2 months ago
- This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai ☆113 · Updated last week
- Intel® NPU Acceleration Library ☆703 · Updated 9 months ago
- Examples for using ONNX Runtime for machine learning inferencing. ☆1,601 · Updated last week (inference sketch after this list)
- A toolkit to help optimize ONNX models ☆409 · Updated this week
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆306 · Updated last year
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆420 · Updated last week (authoring sketch after this list)
- The Hailo Model Zoo includes pre-trained models and a full building and evaluation environment ☆594 · Updated 3 weeks ago
- No-code CLI designed for accelerating ONNX workflows ☆227 · Updated 8 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆532 · Updated last week (Optimum Intel sketch after this list)
- The Qualcomm Cloud AI SDK (Platform and Apps) enables high-performance deep learning inference on Qualcomm Cloud AI platforms delivering high … ☆71 · Updated 2 months ago
- A modern model graph visualizer and debugger ☆1,384 · Updated this week
- WebAssembly binding for llama.cpp, enabling in-browser LLM inference ☆993 · Updated last month
- Open Neural Network Exchange (ONNX) to C compiler ☆359 · Updated this week
- TinyChatEngine: On-Device LLM Inference Library ☆939 · Updated last year
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… ☆432 · Updated this week
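The PyTorch-conversion entry above (presumably the ai-edge-torch project) exposes a one-call converter. A minimal conversion sketch, assuming the documented `ai_edge_torch.convert()` entry point and using a torchvision model as a stand-in:

```python
import torch
import torchvision
# Assumes the ai-edge-torch package and its top-level convert() API.
import ai_edge_torch

# Any torch.nn.Module in eval mode plus sample inputs that fix the shapes.
model = torchvision.models.mobilenet_v3_small(weights=None).eval()
sample_inputs = (torch.randn(1, 3, 224, 224),)

# Convert to a LiteRT model and serialize it as a .tflite flatbuffer.
edge_model = ai_edge_torch.convert(model, sample_inputs)
edge_model.export("mobilenet_v3_small.tflite")
```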
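The "On-device AI across mobile, embedded, and edge for PyTorch" entry is ExecuTorch. An export sketch, assuming `torch.export` capture and the `executorch.exir.to_edge` lowering path; the toy module is a stand-in:

```python
import torch
# Assumes the executorch package; to_edge lowers an exported program
# to the Edge dialect before emitting an ExecuTorch program.
from executorch.exir import to_edge

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x) + 1.0

# Capture with torch.export, lower to Edge, then emit the .pte program
# consumed by the on-device ExecuTorch runtime.
exported = torch.export.export(TinyModel(), (torch.randn(2, 4),))
et_program = to_edge(exported).to_executorch()

with open("tiny_model.pte", "wb") as f:
    f.write(et_program.buffer)
```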
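For the OpenVINO GenAI entry, a generation sketch assuming the `openvino_genai.LLMPipeline` Python class and a model directory previously exported for OpenVINO (the path is a placeholder):

```python
# Assumes the openvino-genai package; LLMPipeline wraps an exported OpenVINO LLM.
import openvino_genai

# model_dir is a placeholder for a directory holding an OpenVINO-exported model.
pipe = openvino_genai.LLMPipeline("./model_dir", "CPU")
print(pipe.generate("What is on-device inference?", max_new_tokens=64))
```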
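The AI Edge Quantizer entry covers post-training quantization of LiteRT models; its own recipe-based API is not reproduced here. The long-standing TFLite-converter path below illustrates the same idea, dynamic-range post-training quantization, using a throwaway Keras model:

```python
import tensorflow as tf

# Stand-in Keras model; any Keras model or SavedModel works with the converter.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# Dynamic-range post-training quantization via the classic TFLite converter.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()

with open("model_int8.tflite", "wb") as f:
    f.write(tflite_bytes)
```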
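For the ONNX Runtime examples entry, the canonical inference loop is only a few lines; the model path and input shape below are placeholders:

```python
import numpy as np
import onnxruntime as ort

# Load any .onnx model (placeholder path) and pick an execution provider.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Build a feed dict keyed by the model's input names and fetch all outputs.
input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)
```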
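For the ONNX Script entry, an authoring sketch assuming the `@script()` decorator and typed tensor annotations from the onnxscript package:

```python
# Assumes the onnxscript package; @script() compiles a typed Python
# function into an ONNX function/model.
from onnxscript import script, FLOAT
from onnxscript import opset18 as op

@script()
def scaled_tanh(X: FLOAT[...]) -> FLOAT[...]:
    # Plain Python arithmetic and opset calls become ONNX graph nodes.
    return 0.5 * op.Tanh(2.0 * X)

# Produces a regular onnx.ModelProto that can be saved or run with ONNX Runtime.
model_proto = scaled_tanh.to_model_proto()
```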
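For the Optimum Intel entry, a sketch of OpenVINO-accelerated Transformers inference, assuming the `OVModelForSequenceClassification` class and its `export=True` on-the-fly conversion (the model id is an example checkpoint):

```python
# Assumes optimum-intel with the OpenVINO extra installed.
from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly.
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The OpenVINO model drops into the standard transformers pipeline.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("On-device inference is getting fast."))
```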