Cambricon / torch_mlu
☆24, updated last month
Alternatives and similar repositories for torch_mlu:
Users interested in torch_mlu are comparing it to the libraries listed below.
- Development repository for the Triton-Linalg conversion (☆185, updated 2 months ago)
- Examples of CUDA implementations using CUTLASS CuTe (☆159, updated 2 months ago)
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer (☆91, updated 3 weeks ago)
- Optimize GEMM with tensor cores, step by step (☆25, updated last year)
- Yinghan's Code Sample (☆323, updated 2 years ago)
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance (☆73, updated 3 weeks ago; a WMMA baseline sketch follows this list)
- TiledCUDA is a highly efficient kernel …; the authors invite you to visit and follow their new repository at https://github.com/microsoft/TileFusion (☆181, updated 2 months ago)
- Convolution operator optimization on GPU, including GEMM-based (implicit GEMM) convolution (☆29, updated 3 months ago; an implicit-GEMM sketch follows this list)
- Implements fp8 flash attention on the Ada architecture using the cutlass repository (☆63, updated 8 months ago)
- Optimizes softmax in Triton for many cases (☆20, updated 7 months ago)
- Shared Middle-Layer for Triton Compilation (☆246, updated last week)
- Standalone Flash Attention v2 kernel without a libtorch dependency (☆108, updated 7 months ago)
- AI Accelerator Benchmark focuses on evaluating AI accelerators from a practical production perspective, including ease of use and ver… (☆236, updated last week)
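
Several of the GEMM projects above (the step-by-step tensor-core GEMM and the HGEMM-from-scratch repositories) start from the same baseline: a naive WMMA kernel in which each warp computes one 16x16 tile of the output. The sketch below is illustrative only, not code from any of those repositories; it assumes row-major fp16 inputs, fp32 accumulation, dimensions that are multiples of 16, and a grid sized so that every warp maps to a valid tile.

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// Naive WMMA HGEMM baseline (sm_70+): C = A * B, with A (MxK) and B (KxN)
// in row-major half precision and C (MxN) accumulated in float.
// Each warp computes one 16x16 tile of C.
__global__ void wmma_hgemm_naive(const half* A, const half* B, float* C,
                                 int M, int N, int K) {
    // Warp coordinates in the 16x16 tile grid of C.
    int warpM = (blockIdx.x * blockDim.x + threadIdx.x) / warpSize;
    int warpN = blockIdx.y * blockDim.y + threadIdx.y;

    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;
    wmma::fill_fragment(c_frag, 0.0f);

    // March along K in 16-wide steps, issuing one tensor-core MAC per step.
    for (int k = 0; k < K; k += 16) {
        wmma::load_matrix_sync(a_frag, A + warpM * 16 * K + k, K);
        wmma::load_matrix_sync(b_frag, B + k * N + warpN * 16, N);
        wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);
    }
    wmma::store_matrix_sync(C + warpM * 16 * N + warpN * 16, c_frag, N,
                            wmma::mem_row_major);
}
// Example launch: blockDim = (128, 4) gives 4 x 4 warps per block
// (a 64x64 tile of C), so gridDim = (M / 64, N / 64).
```

The listed repositories improve on this baseline with shared-memory staging, double buffering, and swizzled layouts (via MMA PTX or CuTe), which is where the gap to cuBLAS closes.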
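
For the implicit-GEMM convolution entry, the core idea is to treat the convolution as a GEMM whose operand tiles are produced by index arithmetic instead of a materialized im2col buffer. A minimal sketch of that starting point, assuming NCHW input, KCRS filters, stride 1, and no padding (so P = H - R + 1 and Q = W - S + 1); all names here are illustrative, not the project's own code:

```cuda
// Naive implicit-GEMM convolution: each thread computes one output
// element O[n][k][p][q]. In the GEMM view the row index is k, the
// column index is (n, p, q), and the reduction runs over the
// flattened (c, r, s) axis, with input offsets computed on the fly.
__global__ void conv2d_implicit_gemm_naive(
    const float* I, const float* F, float* O,
    int N, int C, int H, int W,    // input:  N x C x H x W
    int K, int R, int S,           // filter: K x C x R x S
    int P, int Q)                  // output: N x K x P x Q
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= N * K * P * Q) return;

    // Decode the flattened output index into (n, k, p, q).
    int q = idx % Q;
    int p = (idx / Q) % P;
    int k = (idx / (Q * P)) % K;
    int n = idx / (Q * P * K);

    float acc = 0.0f;
    // GEMM-style reduction over the flattened (c, r, s) axis.
    for (int crs = 0; crs < C * R * S; ++crs) {
        int s = crs % S;
        int r = (crs / S) % R;
        int c = crs / (S * R);
        acc += I[((n * C + c) * H + (p + r)) * W + (q + s)] *
               F[((k * C + c) * R + r) * S + s];
    }
    O[((n * K + k) * P + p) * Q + q] = acc;
}
// Launch: one thread per output element, e.g.
//   int total = N * K * P * Q;
//   conv2d_implicit_gemm_naive<<<(total + 255) / 256, 256>>>(...);
```

An optimized version tiles this reduction through shared memory and tensor-core fragments exactly as in the GEMM sketch above; the index decode is the only convolution-specific part.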