hisrg / SNPE
Snapdragon Neural Processing Engine (SNPE) SDK. The Snapdragon Neural Processing Engine (SNPE) is a Qualcomm Snapdragon software-accelerated runtime for the execution of deep neural networks. With SNPE, users can:
- Execute an arbitrarily deep neural network
- Execute the network on the Snapdragon™ CPU, the Adreno™ GPU, or the Hexagon™ DSP
- Debug t…
☆34 · Updated 3 years ago
Alternatives and similar repositories for SNPE
Users interested in SNPE are comparing it to the libraries listed below.
- A simple forward-inference framework built by dissecting MNN (for study!) ☆23 · Updated 4 years ago
- ☆41 · Updated 2 years ago
- A converter from MegEngine to other frameworks ☆70 · Updated 2 years ago
- Inference of TinyLlama models on ncnn ☆24 · Updated 2 years ago
- ☆24 · Updated 2 years ago
- ☆125 · Updated last year
- Sample projects for InferenceHelper, a Helper Class for Deep Learning Inference Frameworks: TensorFlow Lite, TensorRT, OpenCV, ncnn, MNN, … ☆21 · Updated 3 years ago
- NVIDIA TensorRT Hackathon 2023 semifinal topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆42 · Updated last year
- A converter for llama2.c legacy models to ncnn models ☆81 · Updated last year
- ☆33 · Updated last year
- C++ implementations of various tokenizers (SentencePiece, tiktoken, etc.) ☆35 · Updated this week
- Tencent ncnn with added CUDA support ☆69 · Updated 4 years ago
- An easy way to run, test, benchmark, and tune OpenCL kernel files ☆23 · Updated 2 years ago
- A toolkit to help optimize large ONNX models ☆158 · Updated last year
- NeRF in ncnn with C++ & Vulkan ☆68 · Updated 2 years ago
- Stable Diffusion using MNN ☆66 · Updated last year
- ☆25 · Updated 4 years ago
- ☆84 · Updated 2 years ago
- ☆27 · Updated 2 months ago
- A tool to convert a TensorRT engine/plan to a fake ONNX ☆41 · Updated 2 years ago
- Large Language Model ONNX inference framework ☆36 · Updated 8 months ago
- Mobile App Open ☆60 · Updated this week
- Wanwu models release; code will be released soon ☆24 · Updated 3 years ago
- A faster implementation of OpenCV-CUDA that uses OpenCV objects, and more! ☆53 · Updated this week
- ☆67 · Updated 3 years ago
- An LLM deployment project based on ONNX ☆43 · Updated 11 months ago
- Count the number of parameters / MACs / FLOPs of ONNX models ☆94 · Updated 10 months ago
- Inference of RWKV v5, v6, and v7 with the Qualcomm AI Engine Direct SDK ☆79 · Updated last week
- ONNX Command-Line Toolbox ☆35 · Updated 11 months ago
- An example of Segment Anything inference with ncnn ☆124 · Updated 2 years ago