quic / ai-hub-models
Qualcomm® AI Hub Models is our collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) and ready to deploy on Qualcomm® devices.
☆918 · Updated this week
Alternatives and similar repositories for ai-hub-models
Users interested in ai-hub-models are comparing it to the libraries listed below.
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) a… ☆372 · Jan 27, 2026 · Updated 2 weeks ago
- ☆181 · Jan 22, 2026 · Updated 3 weeks ago
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,558 · Updated this week
- ☆342 · Updated this week
- On-device AI across mobile, embedded and edge for PyTorch ☆4,258 · Updated this week
- Inference RWKV v5, v6 and v7 with Qualcomm AI Engine Direct SDK ☆90 · Feb 5, 2026 · Updated last week
- QAI AppBuilder is designed to help developers easily execute models on WoS and Linux platforms. It encapsulates the Qualcomm® AI Runtime … ☆123 · Feb 6, 2026 · Updated last week
- Support PyTorch model conversion with LiteRT. ☆935 · Feb 7, 2026 · Updated last week
- This project builds and deploys an SNPE model on Qualcomm devices that have unsupported layers which are not part of… ☆10 · Oct 4, 2021 · Updated 4 years ago
- Self-implemented NN operators for Qualcomm's Hexagon NPU ☆47 · Sep 30, 2025 · Updated 4 months ago
- Fast Multimodal LLM on Mobile Devices ☆1,395 · Feb 3, 2026 · Updated last week
- Run Chinese MobileBert model on SNPE. ☆15 · May 19, 2023 · Updated 2 years ago
- [EMNLP Findings 2024] MobileQuant: Mobile-friendly Quantization for On-device Language Models ☆67 · Sep 22, 2024 · Updated last year
- A workbench for learning and practicing on-device AI technology in real scenarios with online TV on an Android phone, powered by ggml (llama.cpp… ☆186 · Jun 12, 2025 · Updated 8 months ago
- AI Plugins for Windows on Snapdragon ☆31 · May 9, 2025 · Updated 9 months ago
- A text-to-image generation project based on the open-source Stable Diffusion V1.5 model; it produces models that can run on a phone's CPU and NPU, together with a companion model runtime framework. ☆234 · Mar 29, 2024 · Updated last year
- MiniCPM on Android platform. ☆639 · Mar 19, 2025 · Updated 10 months ago
- LiteRT, successor to TensorFlow Lite, is Google's on-device framework for high-performance ML & GenAI deployment on edge platforms, via e… ☆1,468 · Updated this week
- MediaTek's TFLite delegate ☆51 · Dec 8, 2025 · Updated 2 months ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,330 · Apr 15, 2024 · Updated last year
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024). ☆1,409 · Apr 21, 2025 · Updated 9 months ago
- A simple tutorial of SNPE. ☆183 · Mar 30, 2023 · Updated 2 years ago
- YOLOv7-tiny model inference on Qualcomm SNPE for pedestrian detection on an embedded system. ☆13 · Sep 23, 2024 · Updated last year
- Generative AI extensions for onnxruntime ☆957 · Updated this week
- Demonstration of running a native LLM on Android device. ☆226 · Updated this week
- QNN quantization of YOLOv5 in the Qualcomm AI Engine Direct environment, with CPU inference. ☆16 · Sep 10, 2024 · Updated last year
- Examples for using ONNX Runtime for machine learning inferencing. ☆1,605 · Updated this week
- Universal LLM Deployment Engine with ML Compilation ☆22,039 · Updated this week
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… ☆1,964 · Updated this week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆12,867 · Updated this week
- Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massiv… ☆924 · Updated this week
- MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba. Full multimodal LLM … ☆14,104 · Updated this week
- GLM Series Edge Models ☆158 · Jun 12, 2025 · Updated 8 months ago
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator ☆19,276 · Updated this week
- High-speed and easy-to-use LLM serving framework for local deployment ☆146 · Aug 7, 2025 · Updated 6 months ago
- A primitive library for neural networks ☆1,368 · Nov 24, 2024 · Updated last year
- Deep learning inference SW framework based on TensorFlow Lite for Aarch64 Linux with GPU and Hexagon delegate ☆12 · Mar 11, 2025 · Updated 11 months ago
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologi… ☆3,113 · Feb 6, 2026 · Updated last week
- LLM deployment project based on MNN. This project has been merged into MNN. ☆1,614 · Jan 20, 2025 · Updated last year