MooreThreads / vllm_musa
A high-throughput and memory-efficient inference and serving engine for LLMs
☆65 · Updated last year
Alternatives and similar repositories for vllm_musa
Users interested in vllm_musa are comparing it to the repositories listed below.
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆267 · Updated 2 months ago
- Run generative AI models on Sophgo BM1684X/BM1688 ☆251 · Updated last week
- llm-export can export LLM models to ONNX. ☆317 · Updated last week
- ☆50 · Updated last year
- Run ChatGLM2-6B on BM1684X ☆50 · Updated last year
- ☆64 · Updated last week
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆444 · Updated last month
- ☆129 · Updated 10 months ago
- ☆70 · Updated last year
- FlagScale is a large model toolkit based on open-sourced projects. ☆364 · Updated last week
- ☆31 · Updated this week
- ☆175 · Updated this week
- ☆139 · Updated last year
- PaddlePaddle custom device implementation (custom hardware integration for PaddlePaddle). ☆97 · Updated last week
- Triton documentation in Simplified Chinese (Triton 中文文档) ☆87 · Updated 6 months ago
- Performance testing for LLM inference serving ☆44 · Updated last year
- ☆508 · Updated last month
- vLLM documentation in Simplified Chinese (vLLM 中文文档) ☆115 · Updated 2 weeks ago
- A powerful toolkit for compressing large models, including LLMs, VLMs, and video generation models. ☆599 · Updated 2 months ago
- Export LLaMA to ONNX ☆136 · Updated 10 months ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆201 · Updated 3 weeks ago
- ☆59 · Updated 11 months ago
- This repo archives my notes, code, and materials from CS learning. ☆58 · Updated last week
- Deploying LLMs offline on the NVIDIA Jetson platform marks the dawn of a new era in embodied intelligence, where devices can function ind… ☆102 · Updated last year
- ☆430 · Updated last month
- C++ implementation of Qwen-LM ☆606 · Updated 10 months ago
- torch_musa is an open source repository based on PyTorch, which can make full use of the super computing power of MooreThreads graphics c… ☆437 · Updated last week
- LLaMa/RWKV ONNX models, quantization, and test cases ☆367 · Updated 2 years ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆116 · Updated last year
- A lightweight LLM inference framework ☆738 · Updated last year