MooreThreads / torch_musa
torch_musa is an open-source repository based on PyTorch that enables PyTorch to take full advantage of the computing power of MooreThreads graphics cards.
☆418 Updated 3 weeks ago
Alternatives and similar repositories for torch_musa
Users interested in torch_musa are comparing it to the libraries listed below.
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆388 Updated this week
- A lightweight LLM inference framework ☆731 Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆52 Updated 8 months ago
- PaddlePaddle custom device implementation. ☆86 Updated this week
- ☆48 Updated last year
- llm-export can export LLM models to ONNX. ☆300 Updated 6 months ago
- MegCC is a deep learning model compiler with an ultra-lightweight runtime, high efficiency, and easy portability ☆486 Updated 8 months ago
- LLM deployment project based on MNN. This project has been merged into MNN. ☆1,597 Updated 5 months ago
- A CPU tool for benchmarking peak floating-point performance ☆556 Updated last week
- Triton documentation in Simplified Chinese / Triton 中文文档 ☆74 Updated 3 months ago
- Run generative AI models on Sophgo BM1684X/BM1688 ☆225 Updated this week
- Efficient operator implementations for the Cambricon Machine Learning Unit (MLU). ☆123 Updated 3 weeks ago
- Low-bit LLM inference on CPU/NPU with lookup table ☆823 Updated last month
- Machine learning compiler based on MLIR for Sophgo TPU. ☆757 Updated last week
- ☆30 Updated last week
- This is an inference framework for the RWKV large language model implemented purely in native PyTorch. The official native implementation… ☆130 Updated 11 months ago
- ☆611 Updated last year
- C++ implementation of Qwen-LM ☆596 Updated 7 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆259 Updated last month
- FlagGems is an operator library for large language models implemented in the Triton language. ☆624 Updated this week
- LLaMA/RWKV ONNX models, quantization, and test cases ☆363 Updated 2 years ago
- PyTorch Neural Network eXchange ☆602 Updated last week
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆475 Updated last year
- ☆463 Updated this week
- ☆139 Updated last year
- ☆149 Updated 6 months ago
- [EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V…" ☆512 Updated this week
- ☆428 Updated last week
- Export LLaMA to ONNX ☆129 Updated 6 months ago
- ☆128 Updated 6 months ago