chengzeyi / ParaAttention

Context parallel attention that accelerates DiT model inference with dynamic caching (https://wavespeed.ai/)
264 · Updated this week
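
The description mentions dynamic caching for DiT inference. Below is a minimal, hypothetical sketch of that general idea: run a cheap leading transformer block each diffusion step, and if its output changes little from the previous step, reuse the cached result of the remaining blocks. The names here (`CachedDiT`, `similarity_threshold`) are illustrative only and are not ParaAttention's actual API.

```python
# Illustrative sketch of dynamic caching across diffusion steps (not ParaAttention's API).
import torch
import torch.nn as nn


class CachedDiT(nn.Module):
    def __init__(self, blocks: nn.ModuleList, similarity_threshold: float = 0.95):
        super().__init__()
        self.blocks = blocks
        self.threshold = similarity_threshold
        self._prev_first_residual = None
        self._cached_output = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Always run the first block; its residual acts as a cheap change detector.
        first_out = self.blocks[0](x)
        residual = first_out - x

        if self._prev_first_residual is not None:
            sim = torch.cosine_similarity(
                residual.flatten(), self._prev_first_residual.flatten(), dim=0
            )
            if sim > self.threshold:
                # The hidden states barely changed since the last step:
                # skip the remaining blocks and reuse the cached output.
                return self._cached_output

        out = first_out
        for block in self.blocks[1:]:
            out = block(out)

        self._prev_first_residual = residual.detach()
        self._cached_output = out.detach()
        return out


if __name__ == "__main__":
    dim = 64
    blocks = nn.ModuleList(
        nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim), nn.GELU())
        for _ in range(8)
    )
    model = CachedDiT(blocks)
    x = torch.randn(1, 16, dim)
    for step in range(4):
        # Nearly identical inputs across "diffusion steps" trigger the cache.
        y = model(x + 0.001 * step)
    print(y.shape)
```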

Alternatives and similar repositories for ParaAttention:

Users who are interested in ParaAttention are comparing it to the libraries listed below.