jeffzhou2000 / ggml-hexagon
The original reference implementation of a llama.cpp backend for the Qualcomm Hexagon NPU on Android phones (https://github.com/ggml-org/llama.cpp/pull/12326). Not maintained since Jul 15, 2025.
☆35 · Updated 4 months ago
Alternatives and similar repositories for ggml-hexagon
Users interested in ggml-hexagon are comparing it to the libraries listed below.
- LLM inference in C/C++ ☆46 · Updated this week
- Inference RWKV v5, v6 and v7 with the Qualcomm AI Engine Direct SDK ☆87 · Updated 3 weeks ago
- QAI AppBuilder is designed to help developers easily execute models on WoS and Linux platforms. It encapsulates the Qualcomm® AI Runtime … ☆84 · Updated this week
- LLM deployment project based on ONNX. ☆46 · Updated last year
- mperf is an operator performance tuning toolbox for mobile/embedded platforms. ☆191 · Updated 2 years ago
- This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai ☆93 · Updated last week
- ☆41 · Updated 7 months ago
- EasyNN is a neural-network inference framework developed for teaching, aiming to let anyone write an inference framework from scratch, even with zero prior background. ☆33 · Updated last year
- A llama model inference framework implemented in CUDA C++. ☆62 · Updated last year
- FlagTree is a unified compiler for multiple AI chips, forked from triton-lang/triton. ☆131 · Updated this week
- Stable Diffusion using MNN ☆67 · Updated 2 years ago
- Large Language Model ONNX Inference Framework ☆36 · Updated 2 weeks ago
- Study notes on ggml, a machine-learning inference framework. ☆18 · Updated last year
- ☆124 · Updated last year
- ☆34 · Updated last year
- ☆20 · Updated last year
- ☆168 · Updated this week
- llm-export can export LLM models to ONNX. ☆328 · Updated 3 weeks ago
- Llama 2 inference ☆43 · Updated 2 years ago
- Run generative AI models on Sophgo BM1684X/BM1688 ☆253 · Updated this week
- ☆14 · Updated 2 weeks ago
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend ☆82 · Updated this week
- A repo for LLMs on ncnn ☆33 · Updated last week
- Run Large Language Models on RK3588 with GPU acceleration ☆117 · Updated 2 years ago
- Machine Learning Compilation, by Tianqi Chen ☆49 · Updated 2 years ago
- TensorRT encapsulation: learn, rewrite, practice. ☆29 · Updated 3 years ago
- ☆10 · Updated last week
- A simplified flash-attention implemented with CUTLASS, for educational purposes. ☆50 · Updated last year
- High-speed and easy-to-use LLM serving framework for local deployment ☆132 · Updated 3 months ago
- Hands-on model tuning with TVM, profiled on a Mac M1, x86 CPU, and GTX-1080 GPU. ☆50 · Updated 2 years ago