TorchRWKV / flash-linear-attention

Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton
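For context, "linear attention" replaces the softmax attention's O(N²) pairwise score matrix with a kernel feature map, so the key-value summary can be computed once and reused across queries in O(N) time. The sketch below is not flash-linear-attention's API; it is a minimal non-causal PyTorch illustration of the idea, assuming the common elu(x)+1 feature map from Katharopoulos et al. (2020).

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # Hypothetical minimal sketch, not the library's implementation.
    # Feature map phi(x) = elu(x) + 1 keeps activations positive.
    q = F.elu(q) + 1
    k = F.elu(k) + 1
    # KV summary: sum_j phi(k_j) v_j^T, shape (batch, heads, d_k, d_v).
    # Computed once in O(N * d^2) instead of forming an N x N score matrix.
    kv = torch.einsum('bhnd,bhnv->bhdv', k, v)
    # Per-query normalizer: phi(q_i) . sum_j phi(k_j).
    z = 1.0 / (torch.einsum('bhnd,bhd->bhn', q, k.sum(dim=2)) + eps)
    # Output: phi(q_i) applied to the shared KV summary, then normalized.
    return torch.einsum('bhnd,bhdv,bhn->bhnv', q, kv, z)

# Example: batch 2, 4 heads, sequence length 128, head dim 64.
q = torch.randn(2, 4, 128, 64)
k = torch.randn(2, 4, 128, 64)
v = torch.randn(2, 4, 128, 64)
out = linear_attention(q, k, v)
print(out.shape)  # torch.Size([2, 4, 128, 64])
```

Libraries like flash-linear-attention additionally fuse these contractions into Triton kernels and support causal/chunked variants, which this plain PyTorch sketch omits.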

Alternatives and similar repositories for flash-linear-attention:

Users interested in flash-linear-attention compare it to the libraries listed below.