ShaYeBuHui01 / flash_attention_inference

Benchmarks the performance of the C++ interfaces of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios.
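For context, below is a naive reference implementation of the single-head scaled dot-product attention, O = softmax(QK^T / sqrt(d)) V, that FlashAttention computes. This is a minimal sketch for illustration only; the function name, row-major layout, and shapes (seq_len x head_dim) are assumptions and do not reflect this repository's API or kernels.

```cpp
// Naive scaled dot-product attention: O = softmax(Q K^T / sqrt(d)) V.
// Reference sketch of the computation FlashAttention accelerates; layout and
// signature are illustrative assumptions, not this repository's interface.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

void attention(const std::vector<float>& Q, const std::vector<float>& K,
               const std::vector<float>& V, std::vector<float>& O,
               int seq_len, int head_dim) {
    const float scale = 1.0f / std::sqrt(static_cast<float>(head_dim));
    std::vector<float> scores(seq_len);
    for (int i = 0; i < seq_len; ++i) {
        // scores[j] = (Q_i . K_j) * scale, tracking the row max for stability.
        float max_score = -INFINITY;
        for (int j = 0; j < seq_len; ++j) {
            float dot = 0.0f;
            for (int k = 0; k < head_dim; ++k)
                dot += Q[i * head_dim + k] * K[j * head_dim + k];
            scores[j] = dot * scale;
            max_score = std::max(max_score, scores[j]);
        }
        // Numerically stable softmax over the row.
        float sum = 0.0f;
        for (int j = 0; j < seq_len; ++j) {
            scores[j] = std::exp(scores[j] - max_score);
            sum += scores[j];
        }
        // O_i = sum_j softmax(scores)_j * V_j.
        for (int k = 0; k < head_dim; ++k) {
            float acc = 0.0f;
            for (int j = 0; j < seq_len; ++j)
                acc += scores[j] * V[j * head_dim + k];
            O[i * head_dim + k] = acc / sum;
        }
    }
}

int main() {
    const int seq_len = 4, head_dim = 8;
    std::vector<float> Q(seq_len * head_dim, 0.1f), K(seq_len * head_dim, 0.2f),
        V(seq_len * head_dim, 0.3f), O(seq_len * head_dim, 0.0f);
    attention(Q, K, V, O, seq_len, head_dim);
    std::printf("O[0][0] = %f\n", O[0]);
    return 0;
}
```

FlashAttention avoids materializing the full seq_len x seq_len score matrix that this naive version builds row by row, instead tiling the computation to keep memory traffic low; that difference is what the repository's benchmarks measure.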
