TorchRWKV / flash-linear-attention

Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton (a minimal sketch of the core idea follows below)
29 · Updated last week
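For context, the technique this library implements efficiently in Triton replaces softmax attention with a kernelized form whose cost is linear in sequence length. The sketch below is illustrative only and assumes the common `elu(x) + 1` feature map; it is not flash-linear-attention's actual API.

```python
# Minimal non-causal linear attention sketch (not flash-linear-attention's API).
# Cost is O(N * d^2) in sequence length N, versus O(N^2 * d) for softmax attention.
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, seq_len, dim)
    q = F.elu(q) + 1  # positive feature map phi(q)
    k = F.elu(k) + 1  # positive feature map phi(k)
    # Accumulate sum_n phi(k_n) v_n^T once, then reuse it for every query.
    kv = torch.einsum("bnd,bne->bde", k, v)
    # Per-query normalizer: phi(q_n) . sum_m phi(k_m)
    z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + eps)
    return torch.einsum("bnd,bde,bn->bne", q, kv, z)

# Usage: same shapes as softmax attention.
q = k = v = torch.randn(2, 128, 64)
print(linear_attention(q, k, v).shape)  # torch.Size([2, 128, 64])
```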

Alternatives and similar repositories for flash-linear-attention

Users interested in flash-linear-attention are comparing it to the libraries listed below.
