MooreThreads / vllm_musa
A high-throughput and memory-efficient inference and serving engine for LLMs
☆71 · Updated last year
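vllm_musa ports vLLM to MooreThreads (MUSA) GPUs, so serving should follow stock vLLM usage. A minimal sketch, assuming the fork keeps vLLM's standard OpenAI-compatible entrypoint; the model name and port are illustrative, and flags may differ in an older fork:

```shell
# Start an OpenAI-compatible server (vLLM's documented entrypoint;
# assumed unchanged in the vllm_musa fork -- model name is illustrative).
python -m vllm.entrypoints.openai.api_server \
  --model Qwen/Qwen1.5-7B-Chat \
  --port 8000

# Query it with the standard /v1/completions request shape:
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen1.5-7B-Chat", "prompt": "Hello", "max_tokens": 16}'
```

This is a deployment-command fragment; running it requires a MooreThreads GPU with vllm_musa installed.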
Alternatives and similar repositories for vllm_musa
Users interested in vllm_musa are comparing it to the libraries listed below.
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆270 · Updated 4 months ago
- Run generative AI models on Sophgo BM1684X/BM1688 ☆254 · Updated 2 weeks ago
- llm-export can export LLM models to ONNX. ☆336 · Updated last month
- ☆140 · Updated last year
- ☆73 · Updated last year
- PaddlePaddle custom device implementation (custom hardware backend integration for PaddlePaddle) ☆101 · Updated this week
- ☆180 · Updated 2 weeks ago
- ☆130 · Updated 11 months ago
- ☆52 · Updated last year
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆461 · Updated this week
- Triton documentation in Simplified Chinese / Triton 中文文档 ☆95 · Updated this week
- ☆66 · Updated 2 weeks ago
- Export llama models to ONNX ☆137 · Updated 11 months ago
- torch_musa is an open-source repository based on PyTorch that makes full use of the computing power of MooreThreads graphics c… ☆456 · Updated last month
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models, including LLMs, VLMs, and video generation models. ☆638 · Updated last month
- FlagScale is a large-model toolkit based on open-source projects. ☆425 · Updated last week
- ☆33 · Updated last week
- ☆517 · Updated last month
- ☆60 · Updated last year
- ☆433 · Updated 3 months ago
- ☆76 · Updated last year
- DeepSparkHub selects hundreds of application algorithms and models, covering various fields of AI and general-purpose computing, to suppo… ☆69 · Updated last month
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆119 · Updated last year
- ☆152 · Updated 11 months ago
- A lightweight LLM inference framework ☆744 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆39 · Updated 2 years ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆212 · Updated 2 months ago
- A llama model inference framework implemented in CUDA C++ ☆62 · Updated last year
- Large Language Model ONNX Inference Framework ☆36 · Updated 3 weeks ago
- Run ChatGLM2-6B on the BM1684X ☆49 · Updated last year