MooreThreads / vllm_musa
A high-throughput and memory-efficient inference and serving engine for LLMs
☆51 · Updated 8 months ago
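Since vllm_musa appears to adapt vLLM to MooreThreads' MUSA GPUs, offline inference would presumably follow vLLM's standard Python API. A minimal sketch under that assumption (the model name is illustrative, not something vllm_musa prescribes):

```python
# Assumes a MUSA-enabled vLLM build that keeps the upstream Python interface.
from vllm import LLM, SamplingParams

# Load a model; the engine handles batching and KV-cache management internally.
llm = LLM(model="Qwen/Qwen2-7B-Instruct")  # illustrative model choice

sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)
outputs = llm.generate(["Explain paged attention in one sentence."], sampling)

for request_output in outputs:
    print(request_output.outputs[0].text)
```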
Alternatives and similar repositories for vllm_musa
Users interested in vllm_musa are comparing it to the libraries listed below.
- Run generative AI models on SOPHGO BM1684X/BM1688 · ☆220 · Updated this week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … · ☆256 · Updated 3 weeks ago
- ☆168 · Updated this week
- Run ChatGLM2-6B on BM1684X · ☆49 · Updated last year
- vLLM Documentation in Simplified Chinese / vLLM 中文文档 · ☆80 · Updated last month
- PaddlePaddle custom device implementation (『飞桨』 custom hardware integration) · ☆85 · Updated this week
- ☆44 · Updated 7 months ago
- Triton Documentation in Simplified Chinese / Triton 中文文档 · ☆71 · Updated 2 months ago
- ☆51 · Updated last week
- llm-export can export LLM models to ONNX. · ☆295 · Updated 5 months ago
- Run DeepSeek-R1 GGUFs on KTransformers · ☆236 · Updated 3 months ago
- ☆27 · Updated 7 months ago
- ☆139 · Updated last year
- LLM101n: Let's build a Storyteller (Chinese edition) · ☆131 · Updated 10 months ago
- ☆128 · Updated 6 months ago
- Explore LLM deployment on AXera's AI chips · ☆107 · Updated last week
- Performance testing for LLM inference services · ☆42 · Updated last year
- Hands-on LLM deployment: TensorRT-LLM, Triton Inference Server, vLLM · ☆26 · Updated last year
- llama2.c-zh, a small language model supporting Chinese · ☆147 · Updated last year
- unify-easy-llm (ULM) aims to be a simple one-click training tool for large models, supporting hardware such as NVIDIA GPUs and Ascend NPUs as well as common large models. · ☆55 · Updated 11 months ago
- ☆26 · Updated 2 weeks ago
- Run ChatGLM3-6B on BM1684X · ☆39 · Updated last year
- torch_musa is an open source repository based on PyTorch, which can make full use of the super computing power of MooreThreads graphics c… · ☆414 · Updated this week
- Port of Facebook's LLaMA model in C/C++ · ☆52 · Updated 2 months ago
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch · ☆377 · Updated this week
- FlagTree is a unified compiler for multiple AI chips, forked from triton-lang/triton. · ☆53 · Updated this week
- ☆58 · Updated 7 months ago
- Export LLaMA to ONNX · ☆127 · Updated 6 months ago
- ☆45 · Updated last year
- ☆310 · Updated 6 months ago