MooreThreads / vllm_musa
A high-throughput and memory-efficient inference and serving engine for LLMs
☆51 · Updated 7 months ago
Alternatives and similar repositories for vllm_musa
Users interested in vllm_musa are comparing it to the libraries listed below.
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including …☆254 · Updated last week
- Run ChatGLM2-6B on the BM1684X☆49 · Updated last year
- llama2.c-zh: small language models with Chinese-language support☆147 · Updated last year
- Run generative AI models on Sophgo BM1684X/BM1688☆216 · Updated this week
- Simplified Chinese Triton documentation / Triton 中文文档☆71 · Updated last month
- ☆27 · Updated 7 months ago
- llm-export can export LLM models to ONNX.☆293 · Updated 4 months ago
- LLM101n: Let's build a Storyteller (Chinese edition)☆131 · Updated 9 months ago
- Simplified Chinese vLLM documentation / vLLM 中文文档☆75 · Updated 3 weeks ago
- PaddlePaddle custom device implementation (custom hardware integration for 『飞桨』)☆84 · Updated this week
- ☆166 · Updated this week
- LLM inference service performance testing☆40 · Updated last year
- Run DeepSeek-R1 GGUFs on KTransformers☆234 · Updated 3 months ago
- ☆139 · Updated last year
- ☆127 · Updated 5 months ago
- ☆44 · Updated 7 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (a minimal roofline sketch follows this list).☆100 · Updated last year
- DeepSparkHub selects hundreds of application algorithms and models, covering various fields of AI and general-purpose computing, to suppo…☆64 · Updated last week
- [EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V…
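One of the entries above compares hardware platforms via the roofline model. As a quick illustration of the arithmetic such a comparison rests on, here is a minimal sketch; the hardware numbers are illustrative assumptions (loosely A100-class), not figures taken from any of the listed repositories.

```python
# Roofline model: attainable throughput = min(peak_flops, intensity * peak_bw).
# All numbers below are illustrative assumptions, not measurements.

peak_flops = 312e12   # assumed peak FP16 compute, FLOP/s
peak_bw = 2.0e12      # assumed peak memory bandwidth, bytes/s

# Ridge point: the arithmetic intensity (FLOP/byte) at which a kernel
# stops being memory-bound and becomes compute-bound.
ridge = peak_flops / peak_bw  # 156 FLOP/byte with these assumptions

# Single-token LLM decode is dominated by GEMVs: roughly 2*N*D FLOPs while
# streaming 2*N*D bytes of FP16 weights, i.e. about 1 FLOP/byte.
intensity = 1.0
attainable = min(peak_flops, intensity * peak_bw)

print(f"ridge = {ridge:.0f} FLOP/byte")
print(f"attainable at {intensity} FLOP/byte = {attainable / 1e12:.1f} TFLOP/s (memory-bound)")
```

At ~1 FLOP/byte, decode sits far left of the 156 FLOP/byte ridge, which is why memory bandwidth rather than peak compute usually determines single-stream LLM decode speed.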