Tlntin / qwen-ascend-llm
☆50 · Updated 10 months ago
Alternatives and similar repositories for qwen-ascend-llm
Users interested in qwen-ascend-llm are comparing it to the libraries listed below.
- ☆60 · Updated this week
- llm-export can export LLM models to ONNX. ☆308 · Updated 2 weeks ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆264 · Updated last month
- ☆90 · Updated 2 years ago
- Simplify ONNX models larger than 2 GB. ☆63 · Updated 9 months ago
- Export LLaMA to ONNX. ☆134 · Updated 8 months ago
- Run ChatGLM2-6B on the BM1684X. ☆49 · Updated last year
- LLM101n: Let's build a Storyteller (Chinese edition). ☆132 · Updated last year
- ☆27 · Updated 10 months ago
- ☢️ TensorRT 2023 Hackathon finals: accelerating and optimizing Llama model inference with TensorRT-LLM. ☆50 · Updated last year
- Deploying an LLM on Android phones with MNN-llm: Qwen1.5-0.5B-Chat. ☆84 · Updated last year
- NVIDIA TensorRT Hackathon 2023 finals topic: building and optimizing a Tongyi Qianwen Qwen-7B model with TensorRT-LLM. ☆42 · Updated last year
- Performance testing of LLM inference services. ☆44 · Updated last year
- PDF parsing tool: a vLLM-accelerated implementation of GOT, with MinerU for layout detection and cropping and GOT for table and formula parsing, enabling PDF parsing for RAG. ☆62 · Updated 10 months ago
- ☆174 · Updated this week
- Model compression toolkit engineered for enhanced usability, comprehensiveness, and efficiency. ☆122 · Updated last week
- An ecosystem around LLMs and multimodal models, mainly covering cross-modal search, speculative decoding, QAT quantization, multimodal quantization, chatbots, and OCR. ☆188 · Updated last month
- Large language model ONNX inference framework. ☆36 · Updated 8 months ago
- GLM Series Edge Models. ☆149 · Updated 3 months ago
- Deploying LLMs offline on the NVIDIA Jetson platform marks the dawn of a new era in embodied intelligence, where devices can function ind… ☆103 · Updated last year
- Transformer-related optimization, including BERT and GPT. ☆17 · Updated 2 years ago
- Training a LLaVA model with better Chinese support, with the training code and data open-sourced. ☆70 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆60 · Updated 10 months ago
- unify-easy-llm (ULM) aims to be a simple one-click training tool for large models, supporting NVIDIA GPUs, Ascend NPUs, and other hardware, as well as commonly used large models. ☆57 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆39 · Updated 2 weeks ago
- vLLM documentation in Simplified Chinese / vLLM 中文文档. ☆98 · Updated last week
- LLM deployment on the Ascend 310 chip. ☆22 · Updated last year
- Compare multiple optimization methods on Triton to improve model serving performance. ☆53 · Updated last year
- LLM deployment project based on ONNX. ☆44 · Updated 11 months ago
- Hands-on LLM deployment: TensorRT-LLM, Triton Inference Server, vLLM. ☆26 · Updated last year