DakeQQ / Native-LLM-for-Android
Demonstration of running a native LLM on Android device.
☆67 · Updated this week
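For orientation, the sketch below shows what running an ONNX-exported LLM on-device can look like with the ONNX Runtime Android package (`com.microsoft.onnxruntime:onnxruntime-android`). It is a minimal sketch under stated assumptions: the model path, the `input_ids` input name, and the single-output layout are illustrative and not taken from this repository's actual code.

```kotlin
// Minimal sketch: one forward pass of an ONNX-exported LLM with ONNX Runtime on Android.
// The tensor names ("input_ids", first output) and the model path are assumptions for
// illustration only, not this repository's actual interface.
import ai.onnxruntime.OnnxTensor
import ai.onnxruntime.OrtEnvironment
import ai.onnxruntime.OrtSession

fun runOnce(modelPath: String, tokenIds: LongArray): Any {
    val env = OrtEnvironment.getEnvironment()
    OrtSession.SessionOptions().use { options ->
        // Execution providers (e.g. NNAPI or XNNPACK) could be registered on `options` here.
        env.createSession(modelPath, options).use { session ->
            // Shape [1, seq_len]: a single batch of token ids.
            OnnxTensor.createTensor(env, arrayOf(tokenIds)).use { inputIds ->
                session.run(mapOf("input_ids" to inputIds)).use { result ->
                    return result[0].value  // e.g. next-token logits; the decode loop is omitted.
                }
            }
        }
    }
}
```

A real deployment would wrap this in a tokenizer and an autoregressive decoding loop; the sketch only shows session setup and a single inference call.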
Related projects
Alternatives and complementary repositories for Native-LLM-for-Android
- Deploying a large language model (Qwen1.5-0.5B-Chat) on Android phones with MNN-llm ☆48 · Updated 7 months ago
- Run rwkv5 or rwkv6 inference with the Qualcomm AI Engine Direct SDK ☆36 · Updated this week
- A text-to-image project based on the open-source Stable Diffusion V1.5 model, providing models that run on mobile CPUs and NPUs together with a companion inference framework. ☆105 · Updated 7 months ago
- ☆123 · Updated 10 months ago
- Run Stable Diffusion inference on an Android phone's CPU ☆141 · Updated 11 months ago
- A personal attempt, based on mlc-llm, to deploy and run large language models on Android phones ☆61 · Updated 3 months ago
- Run RWKV V4 ONNX on an Android CPU ☆65 · Updated last year
- Stable Diffusion using MNN ☆62 · Updated last year
- NVIDIA TensorRT Hackathon 2023 second-round topic: building and optimizing Tongyi Qianwen Qwen-7B with TensorRT-LLM ☆40 · Updated last year
- Qwen2 and Llama 3 C++ implementation ☆34 · Updated 5 months ago
- llm-export can export LLM models to ONNX. ☆226 · Updated this week
- NeRF in NCNN with C++ & Vulkan ☆67 · Updated last year
- A naive example of LivePortrait inference with ncnn ☆34 · Updated 3 months ago
- ncnn version of CodeFormer ☆98 · Updated last year
- Hands-on large-model deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆26 · Updated 8 months ago
- Port of Facebook's LLaMA model in C/C++ ☆72 · Updated this week
- GPT2⚡NCNN⚡Chinese chat⚡x86⚡Android ☆79 · Updated 2 years ago
- ☆28 · Updated 3 months ago
- Run ChatGLM2-6B on BM1684X ☆48 · Updated 8 months ago
- Simplify ONNX models larger than 2 GB ☆42 · Updated 8 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆135 · Updated 2 months ago
- Segment Anything Model (SAM) inference with ncnn on Android phones ☆27 · Updated last year
- Tianchi NVIDIA TensorRT Hackathon 2023: third-place preliminary-round solution in the Generative AI Model Optimization Contest ☆47 · Updated last year
- Run a Chinese MobileBert model on SNPE. ☆13 · Updated last year
- SAM and LaMa inpainting with a Qt GUI: draw points or boxes interactively for SAM with real-time result display, then inpaint the selection; see the video in the readme for usage. ☆38 · Updated 9 months ago
- ☆25 · Updated 11 months ago
- segment-anything based on MNN ☆34 · Updated 10 months ago
- Run generative AI models on Sophgo BM1684X ☆120 · Updated this week
- libvits-ncnn is an ncnn implementation of the VITS library that enables cross-platform GPU-accelerated speech synthesis. 🎙️💻 ☆55 · Updated last year
- A converter from llama2.c legacy models to ncnn models. ☆82 · Updated 10 months ago