wangzhaode / llm-export
llm-export can export LLM models to ONNX format.
☆255 · Updated last week
Alternatives and similar repositories for llm-export:
Users interested in llm-export are comparing it to the repositories listed below.
- export llama to onnx ☆111 · Updated 3 weeks ago
- LLaMa/RWKV onnx models, quantization and testcase ☆356 · Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆208 · Updated this week
- Run generative AI models in sophgo BM1684X ☆152 · Updated this week
- ☆127 · Updated 3 weeks ago
- simplify >2GB large onnx model ☆51 · Updated last month
- ☆90 · Updated last year
- ☆591 · Updated 5 months ago
- run ChatGLM2-6B in BM1684X ☆49 · Updated 10 months ago
- [EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V… ☆382 · Updated this week
- ☆140 · Updated 8 months ago
- Deploying a large language model on Android phones with MNN-llm: Qwen1.5-0.5B-Chat ☆62 · Updated 9 months ago
- stable diffusion using mnn ☆65 · Updated last year
- a lightweight LLM model inference framework ☆712 · Updated 9 months ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆582 · Updated 3 months ago
- ☆302 · Updated 3 weeks ago
- Large Language Model Onnx Inference Framework ☆28 · Updated this week
- ☆37 · Updated 2 months ago
- ☆57 · Updated last month
- C++ implementation of Qwen-LM ☆569 · Updated last month
- ☆27 · Updated 2 months ago
- ☆33 · Updated this week
- LLM Inference benchmark ☆377 · Updated 5 months ago
- optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆467 · Updated 10 months ago
- ☆124 · Updated last year
- ☆141 · Updated last week
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆290 · Updated this week
- Inference code for LLaMA models ☆114 · Updated last year
- ☆71 · Updated 2 years ago
- PaddlePaddle custom device implementation (custom hardware integration for PaddlePaddle) ☆77 · Updated this week