An implementation of FlashAttention (FA1–FA4) in PyTorch, written for educational and algorithmic clarity
☆209 · Apr 12, 2026 · Updated 3 weeks ago
Alternatives and similar repositories for FlashAttention-PyTorch
Users interested in FlashAttention-PyTorch are comparing it to the libraries listed below.
- Get down and dirty with FlashAttention 2.0 in PyTorch; plug and play, no complex CUDA kernels ☆114 · Jul 31, 2023 · Updated 2 years ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆508 · Jan 20, 2026 · Updated 3 months ago
- An approximate implementation of the OpenAI paper "An Empirical Model of Large-Batch Training" for MNIST ☆11 · Nov 19, 2022 · Updated 3 years ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆1,128 · Dec 30, 2024 · Updated last year
- A minimal cache manager for PagedAttention, on top of Llama 3 ☆142 · Aug 26, 2024 · Updated last year
- Implement Flash Attention using CuTe ☆106 · Dec 17, 2024 · Updated last year
- ☆16 · Mar 13, 2023 · Updated 3 years ago
- Prune transformer layers ☆74 · May 30, 2024 · Updated last year
- The simplest implementation of recent sparse-attention patterns for efficient LLM inference ☆92 · Jul 17, 2025 · Updated 9 months ago
- ☆27 · Aug 5, 2022 · Updated 3 years ago
- Code and brief explanations of attempts at the ARC-AGI (2024) challenges ☆26 · Nov 11, 2024 · Updated last year
- Triton version of GQA flash attention, based on the tutorial ☆12 · Aug 4, 2024 · Updated last year
- Fast inference from large language models via speculative decoding ☆914 · Aug 22, 2024 · Updated last year
- Code for the blog post "Can Better Cold-Start Strategies Improve RL Training for LLMs?" ☆20 · Mar 9, 2025 · Updated last year
- Port of the Linux kernel's list.h to userspace ☆32 · Mar 18, 2015 · Updated 11 years ago
- A re-implementation of the "Red Teaming Language Models with Language Models" paper by Perez et al., 2022 ☆34 · Oct 9, 2023 · Updated 2 years ago
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆150 · Feb 25, 2026 · Updated 2 months ago
- ☆16 · Nov 14, 2022 · Updated 3 years ago
- ☆52 · May 19, 2025 · Updated 11 months ago
- Fast and memory-efficient exact attention ☆23,628 · May 3, 2026 · Updated last week
- A PyTorch Dataset for Slakh2100 ☆10 · Feb 14, 2024 · Updated 2 years ago
- The simplest online-softmax notebook for explaining Flash Attention ☆16 · Jan 27, 2026 · Updated 3 months ago
- ☆35 · Dec 22, 2025 · Updated 4 months ago
- The official repo/implementation of the paper "Training a Singing Transcription Model Using Connectionist Temporal Classification Loss an…" ☆12 · Mar 25, 2025 · Updated last year
- [NeurIPS 2023 spotlight] Official implementation of HGRN from our NeurIPS 2023 paper "Hierarchically Gated Recurrent Neural Network for Se…" ☆68 · Apr 24, 2024 · Updated 2 years ago
- 🚀 Efficient implementations for emerging model architectures ☆5,032 · May 1, 2026 · Updated last week
- High-performance FP8 GEMM kernels for SM89 and later GPUs ☆21 · Jan 24, 2025 · Updated last year
- Implementation of POET and POET-X for LLM pretraining ☆28 · Mar 12, 2026 · Updated last month
- An evaluation framework for training-free sparse attention in LLMs ☆122 · Jan 27, 2026 · Updated 3 months ago
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆113 · Sep 10, 2024 · Updated last year
- Puzzles for learning Triton ☆2,421 · Apr 1, 2026 · Updated last month
- Code and data for the ACL 2023 Findings paper "Click: Controllable Text Generation with Sequence Likelihood Contrastive Learning" ☆19 · Feb 26, 2024 · Updated 2 years ago
- Using FlexAttention to compute attention with different masking patterns ☆47 · Sep 22, 2024 · Updated last year
- The official repository for "SkyLadder: Better and Faster Pretraining via Context Window Scheduling" ☆42 · Dec 29, 2025 · Updated 4 months ago
- ☆40 · Dec 14, 2025 · Updated 4 months ago
- [ICML 2025] "From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories and Applications" ☆52 · Oct 30, 2025 · Updated 6 months ago
- Use the tokenizer in parallel to achieve superior acceleration ☆20 · Mar 21, 2024 · Updated 2 years ago
- GEMM via WMMA (Tensor Cores) ☆15 · Jul 31, 2022 · Updated 3 years ago
- [EMNLP 2022] Official implementation of TransNormer from our EMNLP 2022 paper "The Devil in Linear Transformer" ☆64 · Jul 30, 2023 · Updated 2 years ago
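Many of the repositories above revolve around the same core idea: FlashAttention's online (streaming) softmax, which processes keys and values block by block while carrying a running row maximum and normalizer, so the full attention-score matrix is never materialized. As a rough orientation only (a minimal single-head sketch, not code from any of the listed repos), the tiling can look like this in plain PyTorch:

```python
import torch

def flash_attention_forward(q, k, v, block_size=64):
    """Tiled attention forward pass in the spirit of FlashAttention:
    iterate over key/value blocks, keeping a running row max and softmax
    normalizer. Single head, no masking; an educational sketch only."""
    n, d = q.shape
    scale = d ** -0.5
    out = torch.zeros_like(q)
    row_max = torch.full((n, 1), float("-inf"))
    row_sum = torch.zeros(n, 1)
    for start in range(0, n, block_size):
        kb = k[start:start + block_size]        # current key block
        vb = v[start:start + block_size]        # current value block
        scores = (q @ kb.T) * scale             # partial scores (n, block)
        block_max = scores.max(dim=-1, keepdim=True).values
        new_max = torch.maximum(row_max, block_max)
        # Rescale the previously accumulated numerator and denominator
        # so all terms share the new, larger row maximum.
        correction = torch.exp(row_max - new_max)
        p = torch.exp(scores - new_max)         # stabilized exponentials
        row_sum = row_sum * correction + p.sum(dim=-1, keepdim=True)
        out = out * correction + p @ vb
        row_max = new_max
    return out / row_sum

# Sanity check against the naive reference implementation.
torch.manual_seed(0)
q, k, v = (torch.randn(128, 32) for _ in range(3))
ref = torch.softmax((q @ k.T) * 32 ** -0.5, dim=-1) @ v
assert torch.allclose(flash_attention_forward(q, k, v), ref, atol=1e-5)
```

The block size is the tunable knob: real kernels pick it to fit a tile of K/V in on-chip SRAM, but the recurrence is the same regardless of tile size.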