AuleTechnologies / Aule-Attention

High-performance FlashAttention-2 for AMD, Intel, and Apple GPUs. Drop-in replacement for PyTorch SDPA. Triton backend for ROCm (MI300X, RDNA3), Vulkan backend for consumer GPUs. No CUDA required.
147 · Jan 27, 2026 · Updated 2 weeks ago
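For context on what a "drop-in replacement for PyTorch SDPA" usually looks like in practice, here is a minimal sketch. The baseline call is standard PyTorch (`torch.nn.functional.scaled_dot_product_attention`); the `aule_attention` import and `flash_attention` name are assumptions for illustration only and may not match the library's actual API.

```python
import torch
import torch.nn.functional as F

# (batch, heads, seq_len, head_dim)
q = torch.randn(1, 8, 1024, 64)
k = torch.randn(1, 8, 1024, 64)
v = torch.randn(1, 8, 1024, 64)

# Standard PyTorch scaled dot-product attention (runs on any backend).
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)

# Hypothetical drop-in call routed to a Triton (ROCm) or Vulkan backend.
# The import and function name below are assumed, not taken from the repo.
try:
    from aule_attention import flash_attention  # assumed entry point
    out = flash_attention(q, k, v, causal=True)
except ImportError:
    pass  # fall back to the stock SDPA result above
```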

Alternatives and similar repositories for Aule-Attention

Users interested in Aule-Attention are comparing it to the libraries listed below.

