MooreThreads / torch_musa
torch_musa is an open-source extension of PyTorch that lets PyTorch take full advantage of the compute power of Moore Threads GPUs.
☆435 · Updated last week
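For readers unfamiliar with how such PyTorch device extensions are used, below is a minimal sketch of running a tensor operation on a Moore Threads GPU through torch_musa. It assumes the package is installed and that importing it registers a "musa" device string with PyTorch, as the project's README describes; details may differ across versions.

```python
# Minimal sketch of using torch_musa as a PyTorch device backend.
# Assumes a Moore Threads GPU, the torch_musa package installed, and
# that "musa" is the registered device string (per the project README).
import torch
import torch_musa  # importing registers the "musa" device with PyTorch

x = torch.randn(2, 3, device="musa")   # allocate a tensor directly on the GPU
y = torch.randn(2, 3).to("musa")       # or move a CPU tensor over
z = (x + y).sum()
print(z.cpu())                          # bring the result back to the CPU
```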
Alternatives and similar repositories for torch_musa
Users interested in torch_musa are comparing it to the libraries listed below
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆434 · Updated last week
- A lightweight LLM inference framework ☆739 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆62 · Updated 10 months ago
- PaddlePaddle custom device implementation (custom hardware integration for PaddlePaddle) ☆95 · Updated this week
- llm-export can export LLM models to ONNX. ☆312 · Updated 2 weeks ago
- ☆53 · Updated last year
- C++ implementation of Qwen-LM ☆605 · Updated 9 months ago
- MegCC is a deep learning model compiler with an ultra-lightweight runtime that is efficient and easy to port ☆486 · Updated 11 months ago
- LLM deployment project based on MNN; it has since been merged into MNN. ☆1,600 · Updated 8 months ago
- MUSA Templates for Linear Algebra Subroutines ☆30 · Updated 6 months ago
- Run generative AI models on Sophgo BM1684X/BM1688 ☆243 · Updated last week
- ☆118 · Updated last year
- A CPU tool for benchmarking peak floating-point performance ☆563 · Updated 2 months ago
- ☆141 · Updated last year
- Triton documentation in Simplified Chinese ☆82 · Updated 5 months ago
- ☆501 · Updated last week
- Export LLaMA to ONNX ☆135 · Updated 8 months ago
- Efficient operator implementations based on the Cambricon Machine Learning Unit (MLU). ☆132 · Updated 2 weeks ago
- Machine learning compiler based on MLIR for the Sophgo TPU. ☆792 · Updated 3 weeks ago
- FlagGems is an operator library for large language models implemented in the Triton language. ☆677 · Updated this week
- Llama 2 inference ☆41 · Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆264 · Updated last month
- Compiler Infrastructure for Neural Networks ☆147 · Updated 2 years ago
- ☆59 · Updated 10 months ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆478 · Updated last year
- ☆61 · Updated last week
- Low-bit LLM inference on CPU/NPU with lookup tables ☆857 · Updated 3 months ago
- ☆428 · Updated this week
- ☆616 · Updated last year
- ☆35 · Updated last month