MooreThreads / vllm_musa
A high-throughput and memory-efficient inference and serving engine for LLMs
☆76Updated last year
Alternatives and similar repositories for vllm_musa
Users interested in vllm_musa are comparing it to the libraries listed below.
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including …☆274Updated 5 months ago
- Run generative AI models on Sophgo BM1684X/BM1688☆263Updated last week
- PaddlePaddle custom device implementation.☆101Updated this week
- llm-export can export LLM models to ONNX.☆344Updated 3 months ago
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch☆483Updated this week
- ☆130Updated last year
- ☆74Updated last week
- ☆141Updated last year
- Triton Documentation in Simplified Chinese / Triton 中文文档☆99Updated last month
- ☆182Updated this week
- ☆55Updated last year
- Run ChatGLM2-6B on BM1684X☆49Updated last year
- torch_musa is an open source repository based on PyTorch, which can make full use of the super computing power of MooreThreads graphics c…☆471Updated last week
- FlagScale is a large model toolkit based on open-source projects.☆471Updated this week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks.☆120Updated last year
- export llama to onnx☆137Updated last year
- ☆523Updated last week
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec…☆223Updated 2 weeks ago
- ☆33Updated last month
- LLM Inference benchmark☆433Updated last year
- ☆60Updated last year
- vLLM Documentation in Simplified Chinese / vLLM 中文文档☆153Updated last month
- ☆155Updated 10 months ago
- Run DeepSeek-R1 GGUFs with KTransformers☆260Updated 10 months ago
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs, and video generative models.☆672Updated 2 months ago
- Llama 2 inference☆43Updated 2 years ago
- Transformer-related optimization, including BERT and GPT☆17Updated 2 years ago
- FlagTree is a unified compiler supporting multiple AI chip backends for custom Deep Learning operations, which is forked from triton-lang…☆197Updated this week
- Transformer-related optimization, including BERT and GPT☆39Updated 2 years ago
- LLM101n: Let's build a Storyteller (Chinese version)☆138Updated last year