MollySophia / rwkv-qualcomm
Inference of RWKV v5, v6, and v7 with the Qualcomm AI Engine Direct SDK
☆72 · Updated this week
Alternatives and similar repositories for rwkv-qualcomm
Users interested in rwkv-qualcomm are comparing it to the libraries listed below.
- Inference of RWKV with multiple supported backends. ☆50 · Updated this week
- ☆124 · Updated last year
- LLM deployment project based on ONNX. ☆42 · Updated 8 months ago
- ☆32 · Updated 11 months ago
- ☆84 · Updated 2 years ago
- A converter for llama2.c legacy models to ncnn models. ☆81 · Updated last year
- Large Language Model ONNX Inference Framework. ☆35 · Updated 5 months ago
- C++ implementation of Qwen2 and Llama3. ☆44 · Updated last year
- QQQ is a hardware-optimized W4A8 quantization solution for LLMs. ☆126 · Updated 2 months ago
- Inference of TinyLlama models on ncnn. ☆24 · Updated last year
- Reference implementation of a llama.cpp backend for Android phones equipped with Qualcomm's Hexagon NPU; details can be seen at http… ☆23 · Updated this week
- Stable Diffusion using MNN. ☆68 · Updated last year
- ☆18 · Updated 5 months ago
- Infer RWKV on ncnn. ☆48 · Updated 9 months ago
- The WorldRWKV project aims to implement training and inference across various modalities using the RWKV7 architecture. By leveraging diff… ☆50 · Updated last month
- A converter and basic tester for RWKV ONNX models. ☆42 · Updated last year
- An inference framework for the RWKV large language model implemented purely in native PyTorch. The official native implementation… ☆128 · Updated 11 months ago
- A toolkit to help optimize ONNX models. ☆159 · Updated this week
- MNN ASR demo. ☆19 · Updated 2 months ago
- A quantization algorithm for LLMs. ☆141 · Updated last year
- Run a Chinese MobileBERT model on SNPE. ☆15 · Updated 2 years ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆129 · Updated this week
- libvits-ncnn is an ncnn implementation of the VITS library that enables cross-platform GPU-accelerated speech synthesis. 🎙️💻 ☆62 · Updated 2 years ago
- A toolkit to help optimize large ONNX models. ☆157 · Updated last year
- Code for the ACM MobiCom 2024 paper "FlexNN: Efficient and Adaptive DNN Inference on Memory-Constrained Edge Devices". ☆53 · Updated 5 months ago
- NVIDIA TensorRT Hackathon 2023 final-round topic: building and optimizing a TensorRT-LLM model for Tongyi Qianwen (Qwen-7B). ☆42 · Updated last year
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization". ☆137 · Updated last month
- Simplify ONNX models larger than 2 GB. ☆58 · Updated 6 months ago
- This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai. ☆46 · Updated this week
- llm-export can export LLM models to ONNX. ☆293 · Updated 5 months ago