MooreThreads / vllm_musa
A high-throughput and memory-efficient inference and serving engine for LLMs
☆46 · Updated 5 months ago
Alternatives and similar repositories for vllm_musa:
Users interested in vllm_musa are comparing it to the libraries listed below:
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including …☆240 · Updated 3 weeks ago
- run ChatGLM2-6B on BM1684X ☆49 · Updated last year
- LLM101n: Let's build a Storyteller (Chinese translation) ☆130 · Updated 7 months ago
- A small language model for Chinese: llama2.c-zh ☆145 · Updated last year
- ☆46 · Updated this week
- vLLM documentation in Simplified Chinese / vLLM 中文文档 ☆55 · Updated 2 months ago
- ☆159 · Updated this week
- run chatglm3-6b on BM1684X ☆38 · Updated last year
- ☆139 · Updated 11 months ago
- ☆127 · Updated 3 months ago
- Triton documentation in Simplified Chinese / Triton 中文文档 ☆62 · Updated 2 months ago
- ☆27 · Updated 4 months ago
- Large Language Model ONNX Inference Framework ☆32 · Updated 2 months ago
- PaddlePaddle custom device implementation ☆82 · Updated this week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆93 · Updated last year
- A llama model inference framework implemented in CUDA C++ ☆48 · Updated 4 months ago
- llm-export can export LLM models to ONNX. ☆274 · Updated 2 months ago
- Run generative AI models on Sophgo BM1684X ☆193 · Updated this week
- Hands-on large-model deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆26 · Updated last year
- ☆90 · Updated last year
- Deploying a large language model on Android phones with MNN-llm: Qwen1.5-0.5B-Chat ☆72 · Updated 11 months ago
- Explore LLM model deployment based on AXera's AI chips ☆87 · Updated 2 weeks ago
- Community-maintained hardware plugin for vLLM on Ascend ☆393 · Updated this week
- ☆39 · Updated 5 months ago
- export llama to ONNX ☆120 · Updated 3 months ago
- ☆58 · Updated 4 months ago
- ☆78 · Updated last year
- ☆30 · Updated last year
- Transformer-related optimizations, including BERT and GPT ☆17 · Updated last year
- ☆26 · Updated this week