thu-ml / SageAttention

Quantized attention that achieves speedups of 2.1-3.1x over FlashAttention2 and 2.7-5.1x over xformers, without losing end-to-end metrics across various models.
900 stars · Updated this week
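
For context, SageAttention is designed as a drop-in replacement for standard attention kernels. Below is a minimal usage sketch; the `sageattn` entry point and its `tensor_layout` and `is_causal` parameters are taken from the project's README, so treat the exact signature and the shapes used here as assumptions and verify against the current repository.

```python
# Minimal sketch: calling SageAttention in place of a standard
# scaled-dot-product attention kernel (assumes a CUDA GPU).
import torch
from sageattention import sageattn  # pip install sageattention

# Hypothetical shapes: batch=2, heads=8, seq_len=1024, head_dim=64.
q = torch.randn(2, 8, 1024, 64, dtype=torch.float16, device="cuda")
k = torch.randn(2, 8, 1024, 64, dtype=torch.float16, device="cuda")
v = torch.randn(2, 8, 1024, 64, dtype=torch.float16, device="cuda")

# "HND" indicates the (batch, heads, seq_len, head_dim) tensor layout.
out = sageattn(q, k, v, tensor_layout="HND", is_causal=False)
print(out.shape)  # expected: torch.Size([2, 8, 1024, 64])
```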

Alternatives and similar repositories for SageAttention:

Users interested in SageAttention are comparing it to the libraries listed below.