ling0322 / libllm
Efficient inference of large language models.
☆149 · Updated last week
Alternatives and similar repositories for libllm
Users that are interested in libllm are comparing it to the libraries listed below
- An editor for the ncnn and pnnx model formats ☆137 · Updated 11 months ago
- ☆125 · Updated last year
- Detect CPU features with a single file ☆422 · Updated last week
- Tiny C++ LLM inference implementation from scratch ☆66 · Updated 3 weeks ago
- A converter for llama2.c legacy models to ncnn models ☆81 · Updated last year
- Inference of TinyLlama models on ncnn ☆24 · Updated 2 years ago
- LLM deployment project based on ONNX ☆44 · Updated 11 months ago
- A simple general-purpose programming language ☆99 · Updated last month
- Inference of RWKV v5, v6 and v7 with the Qualcomm AI Engine Direct SDK ☆83 · Updated this week
- ☆33 · Updated last year
- A minimal OpenCV build runnable anywhere (WIP) ☆84 · Updated 2 years ago
- GPT2⚡NCNN⚡Chinese chat⚡x86⚡Android ☆81 · Updated 3 years ago
- A layered, decoupled deep learning inference engine ☆75 · Updated 7 months ago
- Stable Diffusion inference using MNN ☆67 · Updated 2 years ago
- Benchmark your NCNN models on the 3DS (or crash) ☆10 · Updated last year
- A handbook on building your own AI inference engine: everything you need to know, starting from zero ☆270 · Updated 3 years ago
- A simple forward-inference framework disassembled from MNN (for study!) ☆23 · Updated 4 years ago
- CPU inference for the DeepSeek family of large language models in C++ ☆314 · Updated 4 months ago
- MegCC is a deep learning model compiler with an ultra-lightweight runtime, high efficiency, and easy portability ☆486 · Updated 11 months ago
- Inference of RWKV on ncnn ☆49 · Updated last year
- A demo of how to write a high-performance convolution that runs on Apple silicon ☆54 · Updated 3 years ago
- NVIDIA TensorRT Hackathon 2023 final-round topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆42 · Updated last year
- ☆32 · Updated last year
- Snapdragon Neural Processing Engine (SNPE) SDK: The Snapdragon Neural Processing Engine (SNPE) is a Qualcomm Snapdragon software accelerate… ☆35 · Updated 3 years ago
- A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations ☆43 · Updated 5 months ago
- ☆41 · Updated 2 years ago
- Study notes on ggml, a machine-learning inference framework ☆18 · Updated last year
- ☆84 · Updated 2 years ago
- Self-trained Large Language Models based on Meta LLaMa ☆30 · Updated 2 years ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆197 · Updated 2 weeks ago