Fast and memory-efficient exact attention
☆224 · Mar 19, 2026 · Updated this week
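The "exact attention" in the tagline is the standard attention formula, softmax(QKᵀ/√d)V; flash-attention produces the same result but tiles the computation to avoid materializing the full score matrix in GPU memory. Below is a minimal NumPy reference sketch of that formula only (shapes and names are illustrative), not the fused kernel itself.

```python
import numpy as np

def exact_attention(q, k, v):
    """Reference (unfused) exact attention: softmax(q @ k.T / sqrt(d)) @ v.

    Flash-attention computes the same output, but in tiles so the full
    (seq_q, seq_k) score matrix never lives in GPU HBM at once.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # (seq_q, seq_k) logits
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                            # (seq_q, d) output

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
out = exact_attention(q, k, v)
print(out.shape)  # (4, 8)
```

Any flash-attention implementation (including the ROCm ports listed below) should match this reference numerically, up to floating-point tolerance.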
Alternatives and similar repositories for flash-attention
Users interested in flash-attention are comparing it to the libraries listed below.
- 8-bit CUDA functions for PyTorch ☆70 · Sep 24, 2025 · Updated 5 months ago
- ☆64 · Updated this week
- Fast and memory-efficient exact attention, ported to ROCm ☆13 · Dec 1, 2023 · Updated 2 years ago
- [DEPRECATED] Moved to ROCm/rocm-libraries repo. NOTE: develop branch is maintained as a read-only mirror ☆525 · Updated this week
- Development repository for the Triton language and compiler ☆143 · Updated this week
- A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch ☆25 · Updated this week
- 8-bit CUDA functions for PyTorch, ROCm-compatible ☆41 · Mar 26, 2024 · Updated last year
- Ongoing research training transformer models at scale ☆39 · Updated this week
- Ahead of Time (AOT) Triton Math Library ☆94 · Mar 16, 2026 · Updated last week
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆113 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction ☆34 · Feb 26, 2026 · Updated 3 weeks ago
- AI Tensor Engine for ROCm ☆385 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆117 · Updated this week
- ☆72 · Updated this week
- 8-bit CUDA functions for PyTorch, ported to HIP for use on AMD GPUs ☆53 · Apr 9, 2023 · Updated 2 years ago
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆139 · Mar 13, 2026 · Updated last week
- Standalone Flash Attention v2 kernel without libtorch dependency ☆112 · Sep 10, 2024 · Updated last year
- [DEPRECATED] Moved to ROCm/rocm-systems repo ☆84 · Feb 11, 2026 · Updated last month
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆257 · Updated this week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆94 · Updated this week
- [DEPRECATED] Moved to ROCm/rocm-systems repo ☆413 · Updated this week
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment ☆752 · Aug 6, 2025 · Updated 7 months ago
- AMD SMI ☆119 · Updated this week
- The AMD rocAL is designed to efficiently decode and process images and videos from a variety of storage formats and modify them through a… ☆23 · Updated this week
- ☆30 · Mar 2, 2026 · Updated 3 weeks ago
- Row-wise block scaling for FP8 quantized matrix multiplication. Solution to the GPU MODE AMD challenge ☆18 · Feb 9, 2026 · Updated last month
- AMD's graph optimization engine ☆284 · Updated this week
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, rocWMMA), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA en… ☆51 · Aug 25, 2024 · Updated last year
- ☆172 · Updated this week
- ☆16 · Nov 11, 2025 · Updated 4 months ago
- Simple monkeypatch to boost AMD Navi 3 GPUs ☆48 · Apr 21, 2025 · Updated 11 months ago
- MAD (Model Automation and Dashboarding) ☆32 · Updated this week
- Python package for rematerialization-aware gradient checkpointing ☆27 · Oct 31, 2023 · Updated 2 years ago
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆26 · Mar 11, 2026 · Updated last week
- ☆11 · Jun 29, 2021 · Updated 4 years ago
- [DEPRECATED] Moved to ROCm/rocm-systems repo ☆153 · Jan 21, 2026 · Updated 2 months ago
- A PyTorch native platform for training generative AI models ☆16 · Nov 18, 2025 · Updated 4 months ago
- ☆24 · Mar 5, 2026 · Updated 2 weeks ago
- A Triton-only attention backend for vLLM ☆24 · Feb 11, 2026 · Updated last month