Fast and memory-efficient exact attention
☆75 · Updated Mar 3, 2025
Alternatives and similar repositories for flex_head_fa
Users interested in flex_head_fa are comparing it to the libraries listed below.
- Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators ☆19 · Updated Feb 24, 2026
- ☆118 · Updated May 19, 2025
- ☆20 · Updated Sep 28, 2024
- ☆52 · Updated May 19, 2025
- Using FlexAttention to compute attention with different masking patterns (see the sketch after this list) ☆47 · Updated Sep 22, 2024
- Implements Flash Attention using CuTe. ☆102 · Updated Dec 17, 2024
- ☆36 · Updated Feb 26, 2024
- Code repo for "CritiPrefill: A Segment-wise Criticality-based Approach for Prefilling Acceleration in LLMs". ☆16 · Updated Sep 15, 2024
- nnScaler: Compiling DNN models for Parallel Training ☆124 · Updated Sep 23, 2025
- An extension of TVMScript for writing simple and high-performance GPU kernels with Tensor Cores. ☆50 · Updated Jul 23, 2024
- PyTorch compilation tutorial covering TorchScript, torch.fx, and Slapo ☆17 · Updated Mar 13, 2023
- Stick-breaking attention ☆62 · Updated Jul 1, 2025
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆107 · Updated Jun 28, 2025
- 16-fold memory access reduction with nearly no loss ☆108 · Updated Mar 26, 2025
- ☆12 · Updated Apr 19, 2024
- ☆16 · Updated Nov 26, 2024
- ☆13 · Updated Dec 9, 2024
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆374 · Updated Jul 10, 2025
- KV cache compression for high-throughput LLM inference ☆154 · Updated Feb 5, 2025
- ☆11 · Updated Nov 16, 2019
- ☆24 · Updated May 9, 2025
- ☆13 · Updated Jun 26, 2024
- Official PyTorch Implementation of the Longhorn Deep State Space Model ☆56 · Updated Dec 4, 2024
- A sparse attention kernel supporting mixed sparse patterns ☆472 · Updated Jan 18, 2026
- ☆33 · Updated May 15, 2024
- [ACL 2024] A novel QAT framework with Self-Distillation for enhancing ultra-low-bit LLMs. ☆133 · Updated May 16, 2024
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆239 · Updated Jun 15, 2025
- Helpful tools and examples for working with flex-attention ☆1,140 · Updated Feb 8, 2026
- Some preliminary explorations of Mamba's context scaling. ☆218 · Updated Feb 8, 2024
- An Attention Superoptimizer ☆22 · Updated Jan 20, 2025
- Source code for the paper "Positional Attention: Expressivity and Learnability of Algorithmic Computation" ☆14 · Updated May 26, 2025
- My tests and experiments with some popular deep learning frameworks. ☆17 · Updated Sep 11, 2025
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,474 · Updated this week
- ☆88 · Updated this week
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆194 · Updated Jan 28, 2025
- Triton implementation of FlashAttention2 that adds custom masks. ☆169 · Updated Aug 14, 2024
- Noisy language compiler ☆17 · Updated Jul 31, 2024
- ☆16 · Updated Oct 20, 2025
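
Several entries above center on attention with non-standard masks (the FlexAttention masking examples and the Triton FlashAttention2 fork with custom masks). For orientation only, here is a minimal sketch of that pattern using PyTorch's FlexAttention API (torch >= 2.5); the tensor shapes and the causal mask_mod are illustrative assumptions, and the code is not taken from any repository listed here.

```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

# Illustrative shapes: batch, heads, sequence length, head dim (assumptions, not from any listed repo)
B, H, S, D = 2, 8, 1024, 64
q, k, v = (torch.randn(B, H, S, D, device="cuda", dtype=torch.float16) for _ in range(3))

# mask_mod returns True where a query position is allowed to attend to a key position.
def causal(b, h, q_idx, kv_idx):
    return q_idx >= kv_idx

# The block mask lets the kernel skip fully-masked tiles instead of scoring them.
block_mask = create_block_mask(causal, B=None, H=None, Q_LEN=S, KV_LEN=S, device="cuda")
out = flex_attention(q, k, v, block_mask=block_mask)  # shape (B, H, S, D)
```

In practice flex_attention is usually wrapped in torch.compile so the mask logic is fused into the generated kernel rather than evaluated eagerly.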