MooreThreads / vllm_musa
A high-throughput and memory-efficient inference and serving engine for LLMs
☆69 · Updated last year
Alternatives and similar repositories for vllm_musa
Users interested in vllm_musa are comparing it to the libraries listed below.
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including …☆267 · Updated 3 months ago
- Run generative AI models on Sophgo BM1684X/BM1688☆253 · Updated last week
- PaddlePaddle custom device implementation (custom hardware integration for PaddlePaddle).☆100 · Updated last week
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch☆458 · Updated last week
- ☆130 · Updated 11 months ago
- Triton documentation in Simplified Chinese / Triton 中文文档☆91 · Updated last week
- ☆140 · Updated last year
- llm-export can export LLM models to ONNX.☆330 · Updated last month
- ☆52 · Updated last year
- torch_musa is an open source repository based on PyTorch, which can make full use of the super computing power of MooreThreads graphics c…☆443 · Updated last week
- FlagScale is a large model toolkit based on open-sourced projects.☆412 · Updated last week
- Export LLaMA to ONNX☆137 · Updated 11 months ago
- ☆178 · Updated this week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks.☆119 · Updated last year
- ☆431 · Updated 2 months ago
- ☆33 · Updated this week
- ☆65 · Updated 2 weeks ago
- ☆72 · Updated last year
- A lightweight LLM inference framework☆744 · Updated last year
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec…☆205 · Updated last month
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLM, VLM, and video generation models.☆625 · Updated last week
- LLaMA/RWKV ONNX models, quantization, and test cases☆368 · Updated 2 years ago
- Transformer-related optimization, including BERT, GPT☆39 · Updated 2 years ago
- ☆513 · Updated last week
- Small language models supporting Chinese-language scenarios: llama2.c-zh☆150 · Updated last year
- Run ChatGLM2-6B on the BM1684X☆50 · Updated last year
- ☆60 · Updated last year
- Performance testing for LLM inference services☆44 · Updated last year
- LLM inference benchmark☆431 · Updated last year
- LLM101n: Let's build a Storyteller (Chinese edition)☆135 · Updated last year