Ascend / torchair
☆11 · Updated this week
Alternatives and similar repositories for torchair:
Users who are interested in torchair are comparing it to the libraries listed below.
- ☆57 · Updated last week
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ☆74 · Updated last month
- ☆84 · Updated last month
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (a minimal roofline sketch follows this list). ☆99 · Updated last year
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆91 · Updated this week
- A lightweight design for computation-communication overlap. ☆35 · Updated last week
- PyTorch distributed training acceleration framework ☆48 · Updated 2 months ago
- ☆93 · Updated 7 months ago
- A practical way of learning Swizzle (see the swizzle sketch after this list) ☆19 · Updated 3 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆108 · Updated 7 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆64 · Updated this week
- A simple calculation for LLM MFU (see the MFU sketch after this list). ☆37 · Updated 2 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆84 · Updated this week
- DeeperGEMM: crazy optimized version ☆68 · Updated this week
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆181 · Updated 3 months ago
- ☆66 · Updated 2 weeks ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆78 · Updated 5 months ago
- ☆39 · Updated 11 months ago
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆56 · Updated last year
- GPTQ inference TVM kernel ☆38 · Updated last year
- Implement Flash Attention using Cute. ☆78 · Updated 4 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆53 · Updated 9 months ago
- play gemm with tvm ☆90 · Updated last year
- Implement fp8 flash attention on the Ada architecture using the cutlass repository ☆64 · Updated 8 months ago
- Multiple GEMM operators are constructed with cutlass to support LLM inference. ☆18 · Updated 7 months ago
- ☆28 · Updated 3 months ago
- ☆148 · Updated 3 months ago
- ☆19 · Updated 7 months ago
- Fast and memory-efficient exact attention ☆68 · Updated last week
- An extension of TVMScript to write simple and high-performance GPU kernels with tensor cores. ☆50 · Updated 9 months ago
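
For the roofline-model entry above, here is a minimal sketch of the arithmetic that model captures, assuming illustrative hardware numbers (the peak-throughput and bandwidth values below are placeholders, not measurements of any particular device):

```python
# Minimal roofline-model sketch: attainable throughput is bounded by
# min(peak compute, memory bandwidth * arithmetic intensity).
# The hardware numbers below are illustrative assumptions.

PEAK_TFLOPS = 312.0   # assumed fp16 tensor-core peak, TFLOP/s
MEM_BW_TBPS = 2.0     # assumed HBM bandwidth, TB/s

def attainable_tflops(arithmetic_intensity: float) -> float:
    """Arithmetic intensity = FLOPs performed per byte moved from memory."""
    return min(PEAK_TFLOPS, MEM_BW_TBPS * arithmetic_intensity)

# LLM decode is dominated by GEMV: roughly 2 FLOPs per fp16 weight (2 bytes),
# i.e. an intensity near 1 FLOP/byte, so it sits on the bandwidth roof.
print(attainable_tflops(1.0))    # 2.0 TFLOP/s (memory-bound)

# Large-batch prefill GEMMs have far higher intensity and hit the compute roof.
print(attainable_tflops(500.0))  # 312.0 TFLOP/s (compute-bound)
```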
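
For the swizzle entry, a small plain-Python illustration (not CUDA) of the XOR-based swizzle commonly used to keep shared-memory column accesses conflict-free; the 32-bank layout and tile width are assumptions for the sake of the example:

```python
# XOR-swizzle illustration: remap a logical (row, col) coordinate so that
# reading one logical column touches distinct banks. Assumes a tile whose
# width equals the bank count (32 here).

NUM_BANKS = 32

def swizzled_col(row: int, col: int) -> int:
    # XOR with the row index permutes the columns differently on every row.
    return col ^ (row % NUM_BANKS)

# Unswizzled, all 32 rows of logical column 5 land in bank 5 (32-way conflict).
# Swizzled, the same accesses spread over 32 distinct banks, because XOR with
# a fixed value is a bijection on 0..31.
banks = {swizzled_col(r, 5) for r in range(NUM_BANKS)}
print(len(banks))  # 32 -> conflict-free column read
```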
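
For the MFU entry, a hedged sketch of the usual Model FLOPs Utilization calculation; the 6 * params * tokens approximation of training FLOPs is the common rule of thumb, and the hardware and workload numbers in the example are assumed placeholders:

```python
# MFU (Model FLOPs Utilization) sketch: achieved model FLOP/s divided by
# aggregate peak hardware FLOP/s. Training FLOPs per step are approximated
# with the common 6 * N * T rule (forward + backward), where N = parameter
# count and T = tokens processed in the step.

def mfu(params: float, tokens_per_step: float, step_time_s: float,
        num_gpus: int, peak_flops_per_gpu: float) -> float:
    model_flops = 6.0 * params * tokens_per_step   # approx. FLOPs per step
    achieved = model_flops / step_time_s           # FLOP/s actually delivered
    peak = num_gpus * peak_flops_per_gpu           # aggregate hardware peak
    return achieved / peak

# Example with assumed numbers: a 7B-parameter model, 1M tokens per step,
# 10 s per step, 32 GPUs at 312 TFLOP/s peak each.
print(mfu(7e9, 1e6, 10.0, 32, 312e12))  # ~0.42, i.e. roughly 42% MFU
```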