ShaYeBuHui01 / flash_attention_inference

Benchmarks the performance of the C++ interfaces of flash attention and flash attention v2 in large language model (LLM) inference scenarios.
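For context on what the benchmarked kernels compute, below is a minimal, self-contained sketch (not taken from this repository) of naive single-head scaled dot-product attention in plain C++: softmax(Q Kᵀ / sqrt(d)) V with an optional causal mask, i.e. the operation that flash attention and flash attention v2 accelerate during LLM inference. All names, shapes, and the decode-step example are illustrative assumptions, not the repository's API.

```cpp
// Illustrative reference only: naive single-head attention on the CPU.
// Q: [seq_q, d], K/V: [seq_k, d], O: [seq_q, d]; all row-major float buffers.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

void naive_attention(const std::vector<float>& Q, const std::vector<float>& K,
                     const std::vector<float>& V, std::vector<float>& O,
                     int seq_q, int seq_k, int d, bool causal) {
    const float scale = 1.0f / std::sqrt(static_cast<float>(d));
    std::vector<float> scores(seq_k);
    for (int i = 0; i < seq_q; ++i) {
        // Scaled dot products of query row i against every key row,
        // with future positions masked out when causal masking is on.
        float max_score = -INFINITY;
        for (int j = 0; j < seq_k; ++j) {
            if (causal && j > i + (seq_k - seq_q)) { scores[j] = -INFINITY; continue; }
            float s = 0.0f;
            for (int k = 0; k < d; ++k) s += Q[i * d + k] * K[j * d + k];
            scores[j] = s * scale;
            max_score = std::max(max_score, scores[j]);
        }
        // Numerically stable softmax over the key dimension.
        float denom = 0.0f;
        for (int j = 0; j < seq_k; ++j) {
            scores[j] = std::exp(scores[j] - max_score);
            denom += scores[j];
        }
        // Output row i is the probability-weighted sum of value rows.
        for (int k = 0; k < d; ++k) {
            float acc = 0.0f;
            for (int j = 0; j < seq_k; ++j) acc += (scores[j] / denom) * V[j * d + k];
            O[i * d + k] = acc;
        }
    }
}

int main() {
    // Hypothetical decode step: one new query token attending to an 8-token KV cache.
    const int seq_q = 1, seq_k = 8, d = 4;
    std::vector<float> Q(seq_q * d, 0.1f), K(seq_k * d, 0.2f), V(seq_k * d, 0.3f);
    std::vector<float> O(seq_q * d, 0.0f);
    naive_attention(Q, K, V, O, seq_q, seq_k, d, /*causal=*/true);
    std::printf("O[0] = %f\n", O[0]);
    return 0;
}
```

Flash attention kernels produce the same result but tile the computation so the full seq_q x seq_k score matrix never materializes in GPU memory, which is what the performance comparison in this repository measures.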

Related projects

Alternatives and complementary repositories for flash_attention_inference