Cambricon / torch_mlu
☆14 · Updated last week
Related projects
Alternatives and complementary repositories for torch_mlu
- Development repository for the Triton-Linalg conversion ☆151 · Updated last month
- ☆138 · Updated 2 weeks ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆85 · Updated 8 months ago
- ☆79 · Updated 8 months ago
- ☆79 · Updated last year
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆78 · Updated last year
- ☆140 · Updated 7 months ago
- ☆32 · Updated last month
- Code reading for TVM ☆71 · Updated 2 years ago
- ☆110 · Updated 2 years ago
- Play GEMM with TVM ☆84 · Updated last year
- ☆103 · Updated 7 months ago
- Yinghan's Code Sample ☆289 · Updated 2 years ago
- Examples for the TVM schedule API ☆97 · Updated last year
- ☆100 · Updated 8 months ago
- A home for the final text of all TVM RFCs. ☆101 · Updated 2 months ago
- ☆93 · Updated 3 years ago
- ☆29 · Updated last year
- Efficient operator implementations for the Cambricon Machine Learning Unit (MLU). ☆103 · Updated this week
- ☆79 · Updated 2 months ago
- Triton Compiler related materials. ☆29 · Updated 3 weeks ago
- ☆57 · Updated this week
- ☆52 · Updated 2 years ago
- A benchmark suite designed especially for deep learning operators ☆41 · Updated last year
- TiledCUDA is a highly efficient kernel template library designed to elevate CUDA C's level of abstraction for processing tiles. ☆157 · Updated this week
- ☆35 · Updated 2 years ago
- ☆17 · Updated 7 months ago
- Examples of CUDA implementations with Cutlass CuTe ☆101 · Updated last week
- Performance of the C++ interface of FlashAttention and FlashAttention v2 in large language model (LLM) inference scenarios. ☆29 · Updated 2 months ago
- FP8 flash attention implemented with the cutlass repository on the Ada architecture ☆52 · Updated 3 months ago