MooreThreads / vllm_musa
A high-throughput and memory-efficient inference and serving engine for LLMs
☆52 · Updated 8 months ago
Alternatives and similar repositories for vllm_musa
Users interested in vllm_musa are comparing it to the libraries listed below.
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including …☆259 · Updated last month
- Run generative AI models on Sophgo BM1684X/BM1688 ☆225 · Updated this week
- torch_musa is an open-source repository based on PyTorch, making full use of the computing power of MooreThreads graphics c…☆418 · Updated 3 weeks ago
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆388 · Updated this week
- PaddlePaddle custom device implementation (custom hardware integration for PaddlePaddle) ☆86 · Updated this week
- ☆169 · Updated last week
- Community-maintained hardware plugin for vLLM on Ascend ☆881 · Updated this week
- ☆463 · Updated this week
- llm-export can export LLM models to ONNX. ☆300 · Updated 6 months ago
- Triton Documentation in Simplified Chinese / Triton 中文文档 ☆74 · Updated 3 months ago
- vLLM Documentation in Simplified Chinese / vLLM 中文文档 ☆84 · Updated 2 months ago
- ☆47 · Updated 8 months ago
- Run DeepSeek-R1 GGUFs on KTransformers ☆242 · Updated 4 months ago
- FlagScale is a large model toolkit based on open-sourced projects. ☆325 · Updated this week
- ☆428 · Updated last week
- ☆53 · Updated 2 weeks ago
- [EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V…☆512 · Updated this week
- ☆128 · Updated 6 months ago
- llama2.c-zh, a small language model supporting Chinese-language scenarios ☆147 · Updated last year
- High-performance inference framework for large language models, focusing on efficiency, flexibility, and availability. ☆1,159 · Updated this week
- A lightweight LLM model inference framework ☆731 · Updated last year
- A high-performance deep learning training platform with task-level time-sharing scheduling of GPU compute ☆671 · Updated last year
- Run ChatGLM2-6B on BM1684X ☆49 · Updated last year
- Performance testing for LLM inference services ☆42 · Updated last year
- ☆139 · Updated last year
- LLM101n: Let's build a Storyteller (Chinese translation) ☆131 · Updated 11 months ago
- Low-bit LLM inference on CPU/NPU with lookup table ☆823 · Updated last month
- C++ implementation of Qwen-LM ☆596 · Updated 7 months ago
- LLM Inference benchmark ☆422 · Updated 11 months ago
- ☆27 · Updated 8 months ago