AuleTechnologies / Aule-Attention
High-performance FlashAttention-2 for AMD, Intel, and Apple GPUs. Drop-in replacement for PyTorch SDPA. Triton backend for ROCm (MI300X, RDNA3), Vulkan backend for consumer GPUs. No CUDA required.
148 · Jan 27, 2026 · Updated last month

Alternatives and similar repositories for Aule-Attention

Users interested in Aule-Attention are comparing it to the libraries listed below.
