torch_musa is an open-source repository based on PyTorch that takes full advantage of the computing power of Moore Threads GPUs.
★492, updated Mar 17, 2026
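As a minimal sketch of how the extension above is typically used: torch_musa registers a CUDA-like `"musa"` device in PyTorch (an assumption based on its documented interface; verify against your installed version). A script can probe for the extension and fall back to CPU when it is absent:

```python
import importlib.util


def pick_device() -> str:
    """Return "musa" when the torch_musa extension is importable, else "cpu".

    The "musa" device string mirrors torch_musa's CUDA-like interface;
    this is an assumption based on its README, not a guarantee for every
    version.
    """
    if importlib.util.find_spec("torch_musa") is not None:
        return "musa"
    return "cpu"


# Once torch and torch_musa are imported, tensors can then be placed with
# torch.tensor(..., device=pick_device()).
print(pick_device())
```

On a machine without a Moore Threads GPU this simply prints `cpu`, so the same script runs unmodified on both setups.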
Alternatives and similar repositories for torch_musa
Users interested in torch_musa are comparing it to the libraries listed below.
- A high-throughput and memory-efficient inference and serving engine for LLMs (★94, updated this week)
- A static analytical model for LLM distributed training (★131, updated Jan 8, 2026)
- MUSA Templates for Linear Algebra Subroutines (★45, updated Jan 30, 2026)
- Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models on MTGPU. (★35, updated Oct 13, 2025)
- An adapter layer that ensures torch_musa🔦 delivers a CUDA-compatible PyTorch experience. (★34, updated this week)
- (no description) (★16, updated Mar 30, 2024)
- FlagGems is an operator library for large language models implemented in the Triton language. (★972, updated this week)
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitcode.com/Ascend/pytorch (★510, updated this week)
- Examine and discover LoongArch instructions (★23, updated Apr 14, 2026)
- (no description) (★21, updated Jan 17, 2026)
- (no description) (★54, updated Mar 15, 2025)
- GPGPU processor supporting the RISC-V vector extension, developed with Chisel HDL (★895, updated this week)
- Dissecting NVIDIA GPU Architecture (★121, updated Jul 11, 2022)
- Development repository for the Triton language and compiler (★19,087, updated this week)
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) (★582, updated Apr 20, 2023)
- (no description) (★15, updated May 8, 2025)
- Open Source Computer Vision Library (★21, updated Sep 25, 2024)
- (no description) (★14, updated this week)
- chipStar is a tool for compiling and running HIP/CUDA on SPIR-V via OpenCL or Level Zero APIs. (★326, updated this week)
- (no description) (★165, updated Dec 27, 2024)
- A study of CUTLASS (★22, updated Nov 10, 2024)
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels (★5,928, updated this week)
- (no description) (★47, updated Dec 13, 2024)
- A model compilation solution for various hardware (★468, updated Aug 20, 2025)
- (no description) (★73, updated May 29, 2019)
- Library for modelling the performance costs of different neural network workloads on NPU devices (★34, updated Mar 24, 2026)
- [HPCA 2026] AI Accelerator Benchmark focuses on evaluating AI accelerators from a practical production perspective, including the ease of… (★340, updated Apr 22, 2026)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… (★3,312, updated this week)
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. (★1,295, updated Aug 28, 2025)
- Triton OpenCL backend, using mlir-translate to emit OpenCL source code (★27, updated Aug 27, 2025)
- A Python package extending official PyTorch to easily obtain performance on Intel platforms (★2,011, updated Mar 30, 2026)
- DeepSeek-V3/R1 inference performance simulator (★195, updated Mar 27, 2025)
- Xiao's CUDA Optimization Guide [NO LONGER ADDING NEW CONTENT] (★325, updated Nov 8, 2022)
- ☢️ TensorRT Hackathon 2023 second round: inference acceleration of the Llama model based on TensorRT-LLM (★52, updated Oct 20, 2023)
- Optimized primitives for collective multi-GPU communication (★4,656, updated this week)
- A high-throughput and memory-efficient inference and serving engine for LLMs (★17, updated Jun 3, 2024)
- Transformer-related optimization, including BERT and GPT (★6,412, updated Mar 27, 2024)
- Yinghan's Code Sample (★364, updated Jul 25, 2022)
- CUDA Templates and Python DSLs for High-Performance Linear Algebra (★9,638, updated Apr 25, 2026)