Fast and memory-efficient exact attention
☆221 · Updated this week (Feb 26, 2026)
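For context, flash-attention computes standard softmax attention *exactly* (it is not an approximation); the speed and memory savings come from how the computation is tiled, not from changing the math. A minimal NumPy reference of the operation it accelerates, with illustrative shapes chosen here for the example, might look like:

```python
import numpy as np

def attention(q, k, v):
    """Reference softmax attention: softmax(q @ k^T / sqrt(d)) @ v."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # (seq_q, seq_k) similarity logits
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max to stabilize softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                            # (seq_q, d) weighted sum of values

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
out = attention(q, k, v)
print(out.shape)  # (4, 8)
```

Flash-attention produces the same result as this reference but avoids materializing the full (seq_q, seq_k) score matrix in GPU memory.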
Alternatives and similar repositories for flash-attention
Users interested in flash-attention are comparing it to the libraries listed below.
- 8-bit CUDA functions for PyTorch (☆70, updated Sep 24, 2025)
- ☆65, updated this week
- Fast and memory-efficient exact attention, ported to ROCm (☆13, updated Dec 1, 2023)
- Development repository for the Triton language and compiler (☆141, updated this week)
- [DEPRECATED] Moved to the ROCm/rocm-libraries repo; the develop branch is maintained as a read-only mirror (☆523, updated this week)
- Hackable and optimized Transformers building blocks, supporting composable construction (☆34, updated Feb 24, 2026)
- Ongoing research training transformer models at scale (☆37, updated this week)
- [DEPRECATED] Moved to ROCm/rocm-libraries repo (☆113, updated this week)
- Ahead-of-Time (AOT) Triton Math Library (☆92, updated this week)
- 8-bit CUDA functions for PyTorch, ROCm-compatible (☆41, updated Mar 26, 2024)
- A PyTorch extension: tools for easy mixed-precision and distributed training in PyTorch (☆25, updated this week)
- [DEPRECATED] Moved to ROCm/rocm-libraries repo (☆139, updated this week)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆114, updated this week)
- [DEPRECATED] Moved to ROCm/rocm-systems repo (☆84, updated Feb 11, 2026)
- ☆71, updated this week
- vLLM: a high-throughput and memory-efficient inference and serving engine for LLMs (☆94, updated this week)
- AI Tensor Engine for ROCm (☆360, updated this week)
- The AMD rocAL is designed to efficiently decode and process images and videos from a variety of storage formats and modify them through a… (☆23, updated this week)
- Standalone Flash Attention v2 kernel without a libtorch dependency (☆114, updated Sep 10, 2024)
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment (☆751, updated Aug 6, 2025)
- AMD SMI (☆116, updated Feb 20, 2026)
- [DEPRECATED] Moved to ROCm/rocm-libraries repo (☆256, updated this week)
- ☆169, updated this week
- ☆30, updated this week
- [DEPRECATED] Moved to ROCm/rocm-systems repo (☆411, updated Feb 23, 2026)
- Row-wise block scaling for fp8-quantized matrix multiplication; a solution to the GPU mode AMD challenge (☆17, updated Feb 9, 2026)
- ☆11, updated Jun 29, 2021
- AMD's graph optimization engine (☆280, updated this week)
- ☆23, updated Feb 24, 2026
- ☆38, updated this week
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, rocWMMA), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA en… (☆51, updated Aug 25, 2024)
- [DEPRECATED] Moved to ROCm/rocm-systems repo (☆165, updated Feb 16, 2026)
- MAD (Model Automation and Dashboarding) (☆31, updated this week)
- [DEPRECATED] Moved to ROCm/rocm-libraries repo (☆26, updated Jan 21, 2026)
- ☆36, updated this week
- ☆17, updated Dec 19, 2024
- ☆17, updated Nov 11, 2025
- ☆157, updated this week
- [DEPRECATED] Moved to ROCm/rocm-systems repo (☆86, updated Feb 11, 2026)