Implementation of FlashAttention in PyTorch
☆182 · Jan 12, 2025 · Updated last year
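For context before the list: the core idea shared by FlashAttention and most of the repositories below is tiled attention with an online softmax, which never materializes the full N×N score matrix. Here is a minimal pure-PyTorch sketch of the forward pass only; the function name and tile size are illustrative choices, not taken from any repository on this page, and causal masking is omitted for brevity.

```python
import torch

def flash_attn_forward(q, k, v, tile_size=64):
    """q, k, v: (batch, heads, seq_len, head_dim). Returns softmax(q k^T / sqrt(d)) v,
    computed tile-by-tile over keys/values with a running (online) softmax."""
    scale = q.shape[-1] ** -0.5
    n = k.shape[-2]
    out = torch.zeros_like(q)                             # running weighted-value accumulator
    row_max = torch.full(q.shape[:-1], float("-inf"),
                         device=q.device, dtype=q.dtype)  # running row-wise max of scores
    row_sum = torch.zeros_like(row_max)                   # running softmax denominator
    for start in range(0, n, tile_size):
        k_tile = k[..., start:start + tile_size, :]
        v_tile = v[..., start:start + tile_size, :]
        scores = (q @ k_tile.transpose(-2, -1)) * scale   # (..., seq_len, tile)
        new_max = torch.maximum(row_max, scores.amax(dim=-1))
        # Rescale previously accumulated statistics to the new running max.
        correction = torch.exp(row_max - new_max)
        p = torch.exp(scores - new_max.unsqueeze(-1))     # unnormalized tile probabilities
        row_sum = row_sum * correction + p.sum(dim=-1)
        out = out * correction.unsqueeze(-1) + p @ v_tile
        row_max = new_max
    return out / row_sum.unsqueeze(-1)

# Sanity check against the naive reference implementation.
q, k, v = (torch.randn(2, 4, 128, 32) for _ in range(3))
ref = torch.softmax((q @ k.transpose(-2, -1)) * q.shape[-1] ** -0.5, dim=-1) @ v
assert torch.allclose(flash_attn_forward(q, k, v), ref, atol=1e-5)
```

The real kernels get their speed from fusing this loop into on-chip SRAM; the sketch only demonstrates the math that makes the tiling exact.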
Alternatives and similar repositories for FlashAttention-PyTorch
Users interested in FlashAttention-PyTorch are comparing it to the repositories listed below.
- Get down and dirty with FlashAttention 2.0 in PyTorch: plug and play, no complex CUDA kernels ☆113 · Jul 31, 2023 · Updated 2 years ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆494 · Jan 20, 2026 · Updated 2 months ago
- An approximate implementation of the OpenAI paper "An Empirical Model of Large-Batch Training" for MNIST ☆11 · Nov 19, 2022 · Updated 3 years ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆1,098 · Dec 30, 2024 · Updated last year
- A minimal cache manager for PagedAttention, on top of llama3 ☆141 · Aug 26, 2024 · Updated last year
- Implements Flash Attention using CuTe ☆103 · Dec 17, 2024 · Updated last year
- ☆16 · Mar 13, 2023 · Updated 3 years ago
- Prune transformer layers ☆74 · May 30, 2024 · Updated last year
- Triton implementation of FlashAttention2 that adds custom masks ☆170 · Aug 14, 2024 · Updated last year
- The simplest implementation of recent sparse attention patterns for efficient LLM inference ☆91 · Jul 17, 2025 · Updated 8 months ago
- ☆27 · Aug 5, 2022 · Updated 3 years ago
- Persistent dense GEMM for Hopper in `CuTeDSL` ☆15 · Aug 9, 2025 · Updated 7 months ago
- Implementation of the paper "Opcodes as predictor for malware" by Daniel Bilar ☆11 · Oct 17, 2020 · Updated 5 years ago
- Triton version of GQA flash attention, based on the tutorial ☆12 · Aug 4, 2024 · Updated last year
- Fast inference from large language models via speculative decoding ☆904 · Aug 22, 2024 · Updated last year
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆58 · Nov 20, 2024 · Updated last year
- Code for the blog post "Can Better Cold-Start Strategies Improve RL Training for LLMs?" ☆20 · Mar 9, 2025 · Updated last year
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆146 · Feb 25, 2026 · Updated last month
- A re-implementation of the "Red Teaming Language Models with Language Models" paper by Perez et al., 2022 ☆34 · Oct 9, 2023 · Updated 2 years ago
- ☆16 · Nov 14, 2022 · Updated 3 years ago
- ☆52 · May 19, 2025 · Updated 10 months ago
- Fast and memory-efficient exact attention ☆22,938 · Mar 23, 2026 · Updated last week
- ☆35 · Dec 22, 2025 · Updated 3 months ago
- ☆46 · May 24, 2025 · Updated 10 months ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper "Hierarchically Gated Recurrent Neural Network for Sequence Modeling" ☆68 · Apr 24, 2024 · Updated last year
- Official implementation for the ICLR 2023 paper "Fuzzy Alignments in Directed Acyclic Graph for Non-autoregressive Machine Translation" ☆14 · Mar 1, 2023 · Updated 3 years ago
- The evaluation framework for training-free sparse attention in LLMs ☆122 · Jan 27, 2026 · Updated 2 months ago
- Puzzles for learning Triton ☆2,348 · Mar 18, 2026 · Updated last week
- Standalone Flash Attention v2 kernel without libtorch dependency ☆113 · Sep 10, 2024 · Updated last year
- Code and data for the ACL 2023 Findings paper "Click: Controllable Text Generation with Sequence Likelihood Contrastive Learning" ☆19 · Feb 26, 2024 · Updated 2 years ago
- Code for the paper https://arxiv.org/pdf/2309.06979.pdf ☆21 · Jul 29, 2024 · Updated last year
- Using FlexAttention to compute attention with different masking patterns ☆47 · Sep 22, 2024 · Updated last year
- Llama causal LM fully recreated in LibTorch; designed to be used in Unreal Engine 5 ☆16 · Sep 19, 2024 · Updated last year
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,692 · Updated this week
- [ICLRW'26] EoRA: Fine-tuning-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation ☆30 · Updated this week
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆11 · Jun 10, 2024 · Updated last year
- ☆39 · Dec 14, 2025 · Updated 3 months ago
- Use the tokenizer in parallel to achieve significant speedups ☆20 · Mar 21, 2024 · Updated 2 years ago
- GEMM via WMMA (Tensor Cores) ☆15 · Jul 31, 2022 · Updated 3 years ago