MooreThreads / torch_musa
torch_musa is an open-source extension built on PyTorch that makes full use of the computing power of MooreThreads graphics cards.
☆456 · Updated last month
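As a rough illustration of how such an out-of-tree PyTorch backend is typically used, here is a minimal sketch following the pattern in the torch_musa README; the `musa` device string and the `torch.musa` namespace are assumptions if your installed version differs:

```python
# Minimal torch_musa usage sketch (assumes torch_musa is installed and a
# MooreThreads GPU is available; the "musa" device name follows the project README).
import torch
import torch_musa  # importing registers the "musa" device with PyTorch

if torch.musa.is_available():
    a = torch.randn(2, 3, device="musa")  # allocate a tensor directly on the GPU
    b = torch.randn(2, 3).to("musa")      # or move an existing CPU tensor over
    c = a + b                             # the elementwise add runs on the MooreThreads device
    print(c.cpu())
```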
Alternatives and similar repositories for torch_musa
Users interested in torch_musa are comparing it to the libraries listed below.
- A lightweight LLM inference framework ☆744 · Updated last year
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch (see the usage sketch after this list) ☆461 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆71 · Updated last year
- PaddlePaddle custom device implementation (custom hardware integration for PaddlePaddle) ☆101 · Updated this week
- llm-export can export LLM models to ONNX. ☆336 · Updated last month
- Run generative AI models on Sophgo BM1684X/BM1688 ☆254 · Updated 2 weeks ago
- MegCC is a deep learning model compiler with an ultra-lightweight runtime, high efficiency, and easy portability ☆489 · Updated last year
- Machine learning compiler based on MLIR for Sophgo TPU. ☆829 · Updated last week
- MUSA Templates for Linear Algebra Subroutines ☆37 · Updated 9 months ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆476 · Updated last year
- Efficient operator implementations based on the Cambricon Machine Learning Unit (MLU). ☆143 · Updated last week
- Export LLaMA to ONNX ☆137 · Updated 11 months ago
- C++ implementation of Qwen-LM ☆611 · Updated last year
- A CPU tool for benchmarking peak floating-point performance ☆569 · Updated this week
- Triton documentation in Simplified Chinese / Triton 中文文档 ☆95 · Updated this week
- FlagGems is an operator library for large language models implemented in the Triton language. ☆803 · Updated this week
- Ascend TileLang adapter ☆165 · Updated this week
- LLM deployment project based on MNN. This project has been merged into MNN. ☆1,613 · Updated 11 months ago
- PyTorch Neural Network eXchange ☆657 · Updated this week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆270 · Updated 4 months ago
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models, including LLMs, VLMs, and video generation models. ☆638 · Updated last month
- LLaMA/RWKV ONNX models, quantization, and test cases ☆367 · Updated 2 years ago
- Low-bit LLM inference on CPU/NPU with lookup table ☆902 · Updated 6 months ago
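For comparison, the Ascend PyTorch adapter listed above follows the same out-of-tree backend pattern as torch_musa. A minimal sketch, assuming the `npu` device string and `torch.npu` namespace exposed by torch_npu:

```python
# Minimal torch_npu usage sketch (assumes torch_npu is installed on an Ascend
# machine; the "npu" device name follows the Ascend PyTorch adapter documentation).
import torch
import torch_npu  # importing registers the "npu" device with PyTorch

if torch.npu.is_available():
    x = torch.randn(2, 2, device="npu")
    y = torch.randn(2, 2, device="npu")
    print((x @ y).cpu())  # matrix multiply executed on the Ascend NPU
```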