Qualcomm® AI Hub Models is our collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) and ready to deploy on Qualcomm® devices.
☆1,009 · Apr 29, 2026 · Updated this week
Alternatives and similar repositories for ai-hub-models
Users interested in ai-hub-models are comparing it to the libraries listed below.
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) a… ☆404 · Updated this week
- ☆188 · Updated this week
- ☆343 · Feb 12, 2026 · Updated 2 months ago
- Inference for RWKV v5, v6, and v7 with the Qualcomm AI Engine Direct SDK ☆91 · Feb 14, 2026 · Updated 2 months ago
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,604 · Updated this week
- On-device AI across mobile, embedded, and edge for PyTorch ☆4,547 · Updated this week
- Self-implemented NN operators for Qualcomm's Hexagon NPU ☆64 · Sep 30, 2025 · Updated 7 months ago
- This project builds and deploys an SNPE model on Qualcomm devices when the model has unsupported layers that are not part of… ☆10 · Oct 4, 2021 · Updated 4 years ago
- QAI AppBuilder is designed to help developers easily execute models on WoS and Linux platforms. It encapsulates the Qualcomm® AI Runtime … ☆152 · Apr 22, 2026 · Updated last week
- Supports PyTorch model conversion to LiteRT. ☆1,007 · Updated this week
- This library empowers users to seamlessly port pretrained models and checkpoints on the HuggingFace (HF) hub (developed using HF transfor… ☆87 · Updated this week
- Fast Multimodal LLM on Mobile Devices ☆1,477 · Apr 12, 2026 · Updated 2 weeks ago
- Run a Chinese MobileBERT model on SNPE. ☆15 · May 19, 2023 · Updated 2 years ago
- [EMNLP Findings 2024] MobileQuant: Mobile-friendly Quantization for On-device Language Models ☆68 · Sep 22, 2024 · Updated last year
- A text-to-image project based on the open-source Stable Diffusion v1.5 model that produces models able to run on a phone's CPU and NPU, along with the accompanying model-execution framework. ☆239 · Mar 29, 2024 · Updated 2 years ago
- AI Plugins for Windows on Snapdragon ☆31 · Apr 21, 2026 · Updated last week
- On-device Speech Recognition for Android ☆208 · Jan 24, 2026 · Updated 3 months ago
- A workbench for learning and practicing on-device AI technology in real scenarios with online TV on Android phones, powered by ggml (llama.cpp… ☆192 · Jun 12, 2025 · Updated 10 months ago
- MiniCPM on the Android platform. ☆640 · Mar 19, 2025 · Updated last year
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases. In ICML 2024. ☆1,428 · Apr 21, 2025 · Updated last year
- Generative AI extensions for onnxruntime ☆1,014 · Updated this week
- ☆49 · Updated this week
- LiteRT, successor to TensorFlow Lite, is Google's on-device framework for high-performance ML & GenAI deployment on edge platforms, via e… ☆2,255 · Apr 23, 2026 · Updated last week
- YOLOv7-tiny model inference on Qualcomm SNPE for pedestrian detection on an embedded system. ☆13 · Sep 23, 2024 · Updated last year
- MediaTek's TFLite delegate ☆52 · Dec 8, 2025 · Updated 4 months ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,350 · Apr 15, 2024 · Updated 2 years ago
- A tool for converting ONNX files to LiteRT/TFLite/TensorFlow, PyTorch native code (nn.Module), TorchScript (.pt), state_dict (.pt), Expor… ☆951 · Apr 1, 2026 · Updated 3 weeks ago
- A project for QNN quantization of YOLOv5 under Qualcomm's AI Engine Direct environment, with CPU inference ☆17 · Sep 10, 2024 · Updated last year
- Demonstration of running a native LLM on an Android device. ☆246 · Apr 12, 2026 · Updated 2 weeks ago
- Examples for using ONNX Runtime for machine learning inferencing. ☆1,634 · Feb 24, 2026 · Updated 2 months ago
- Code sample showing how to run and benchmark models on Qualcomm's Windows PCs ☆106 · Oct 4, 2024 · Updated last year
- This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai ☆137 · Apr 22, 2026 · Updated last week
- Universal LLM Deployment Engine with ML Compilation ☆22,517 · Apr 22, 2026 · Updated last week
- MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering high-performance on-device LLMs and edge AI. ☆15,009 · Updated this week
- LLM inference in C/C++ ☆51 · Updated this week
- High-speed, easy-to-use LLM serving framework for local deployment ☆148 · Aug 7, 2025 · Updated 8 months ago
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆13,487 · Updated this week
- ☆18 · Oct 2, 2024 · Updated last year
- ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator ☆20,355 · Updated this week