thu-ml / SageAttention

Quantized attention that achieves 2-3x speedup over FlashAttention and 3-5x over xformers, without losing end-to-end metrics across language, image, and video models.
1,469 stars · Updated last week
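
As a rough illustration of what "quantized attention as a drop-in replacement" means in practice, the sketch below calls `sageattn` on toy fp16 tensors in place of PyTorch's `scaled_dot_product_attention`. The exact argument names (`tensor_layout`, `is_causal`) are assumed from the repository's README and may differ across versions.

```python
# Minimal sketch: SageAttention as a drop-in for
# torch.nn.functional.scaled_dot_product_attention.
# The sageattn signature (tensor_layout, is_causal) is an assumption;
# check the SageAttention README for the current API.
import torch
from sageattention import sageattn

# Toy tensors in (batch, heads, seq_len, head_dim) layout on GPU.
q = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")
k = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")
v = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")

# Quantized attention call; intended to replace
# torch.nn.functional.scaled_dot_product_attention(q, k, v).
out = sageattn(q, k, v, tensor_layout="HND", is_causal=False)
```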

Alternatives and similar repositories for SageAttention

Users interested in SageAttention are comparing it to the libraries listed below.
