MooreThreads / torch_musa
torch_musa is an open-source PyTorch extension that unlocks the full computing power of MooreThreads GPUs.
☆414 · Updated this week
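torch_musa follows the familiar PyTorch device-extension pattern: importing the package registers a `musa` device alongside the built-in backends, so CUDA-style code typically ports with a device-string change. Below is a minimal sketch of that pattern, assuming helper names such as `torch.musa.is_available()` mirror `torch.cuda`; the exact API may differ between torch_musa releases.

```python
import torch
import torch_musa  # importing the extension registers the "musa" device with PyTorch

# Assumed to mirror torch.cuda.is_available(); check the torch_musa docs for the exact helper.
device = "musa" if torch.musa.is_available() else "cpu"

x = torch.randn(4, 4, device=device)   # allocate directly on the MooreThreads GPU
w = torch.randn(4, 4).to(device)       # or move an existing CPU tensor
y = (x @ w).relu()

print(y.device)  # e.g. musa:0 when a MooreThreads GPU is present
```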
Alternatives and similar repositories for torch_musa
Users who are interested in torch_musa are comparing it to the libraries listed below.
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆377 · Updated this week
- A lightweight LLM inference framework ☆730 · Updated last year
- ☆118 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆51 · Updated 8 months ago
- PaddlePaddle custom device implementation (『飞桨』 custom hardware integration) ☆85 · Updated this week
- llm-export can export LLM models to ONNX. ☆295 · Updated 5 months ago
- FlagGems is an operator library for large language models implemented in the Triton Language. ☆583 · Updated this week
- ☆29 · Updated last week
- Triton documentation in Simplified Chinese / Triton 中文文档 ☆71 · Updated 2 months ago
- DeepSparkHub selects hundreds of application algorithms and models, covering various fields of AI and general-purpose computing, to suppo… ☆64 · Updated last week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆256 · Updated 3 weeks ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆474 · Updated last year
- Low-bit LLM inference on CPU/NPU with lookup table ☆811 · Updated 3 weeks ago
- C++ implementation of Qwen-LM ☆595 · Updated 6 months ago
- ☆45 · Updated last year
- Export LLaMA to ONNX ☆127 · Updated 6 months ago
- ☆139 · Updated last year
- ☆146 · Updated 6 months ago
- Stable Diffusion using MNN ☆65 · Updated last year
- cudnn_frontend provides a C++ wrapper for the cuDNN backend API and samples showing how to use it ☆582 · Updated 2 weeks ago
- Machine learning compiler based on MLIR for Sophgo TPU. ☆740 · Updated this week
- FlagScale is a large model toolkit based on open-source projects. ☆307 · Updated this week
- PyTorch Neural Network eXchange ☆594 · Updated 2 weeks ago
- LLaMA/RWKV ONNX models, quantization, and test cases ☆363 · Updated last year
- ☆168 · Updated this week
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆801 · Updated 3 weeks ago
- MegCC is a deep learning model compiler with an ultra-lightweight runtime, high efficiency, and easy portability ☆484 · Updated 8 months ago
- [EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V… ☆492 · Updated this week
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆247 · Updated 2 weeks ago
- Run generative AI models on Sophgo BM1684X/BM1688 ☆220 · Updated this week