xiayuqing0622 / flex_head_fa
Fast and memory-efficient exact attention
☆75 · Updated Mar 3, 2025
Alternatives and similar repositories for flex_head_fa
Users interested in flex_head_fa are comparing it to the libraries listed below.
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators. ☆19 · Updated this week
- ☆118 · Updated May 19, 2025
- ☆20 · Updated Sep 28, 2024
- ☆52 · Updated May 19, 2025
- Using FlexAttention to compute attention with different masking patterns. ☆47 · Updated Sep 22, 2024
- Implements Flash Attention using CuTe. ☆100 · Updated Dec 17, 2024
- ☆35 · Updated Feb 26, 2024
- Code repo for "CritiPrefill: A Segment-wise Criticality-based Approach for Prefilling Acceleration in LLMs". ☆16 · Updated Sep 15, 2024
- nnScaler: Compiling DNN models for Parallel Training. ☆124 · Updated Sep 23, 2025
- An extension of TVMScript for writing simple, high-performance GPU kernels with tensor cores. ☆51 · Updated Jul 23, 2024
- PyTorch compilation tutorial covering TorchScript, torch.fx, and Slapo. ☆17 · Updated Mar 13, 2023
- Stick-breaking attention. ☆62 · Updated Jul 1, 2025
- TileFusion is an experimental C++ macro kernel template library that raises the abstraction level of CUDA C for tile processing. ☆106 · Updated Jun 28, 2025
- A search index specialised for LaTeX equations. Developed for latexsearch.com. ☆17 · Updated Jul 15, 2011
- 16-fold memory access reduction with nearly no loss. ☆110 · Updated Mar 26, 2025
- ☆12 · Updated Apr 19, 2024
- ☆16 · Updated Nov 26, 2024
- ☆13 · Updated Dec 9, 2024
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference. ☆372 · Updated Jul 10, 2025
- KV cache compression for high-throughput LLM inference. ☆153 · Updated Feb 5, 2025
- ☆13 · Updated Jun 26, 2024
- ☆11 · Updated Nov 16, 2019
- ☆24 · Updated May 9, 2025
- A sparse attention kernel supporting mixed sparse patterns. ☆455 · Updated Jan 18, 2026
- Official PyTorch Implementation of the Longhorn Deep State Space Model. ☆56 · Updated Dec 4, 2024
- ☆33 · Updated May 15, 2024
- Flash-Muon: An Efficient Implementation of Muon Optimizer. ☆235 · Updated Jun 15, 2025
- Helpful tools and examples for working with flex-attention (a minimal usage sketch of the flex_attention API follows this list). ☆1,127 · Updated Feb 8, 2026
- [ACL 2024] A novel QAT-with-Self-Distillation framework to enhance ultra-low-bit LLMs. ☆134 · Updated May 16, 2024
- Some preliminary explorations of Mamba's context scaling. ☆218 · Updated Feb 8, 2024
- My tests and experiments with some popular DL frameworks. ☆17 · Updated Sep 11, 2025
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- Source code for the paper "Positional Attention: Expressivity and Learnability of Algorithmic Computation". ☆14 · Updated May 26, 2025
- An Attention Superoptimizer. ☆22 · Updated Jan 20, 2025
- 🚀 Efficient implementations of state-of-the-art linear attention models. ☆4,379 · Updated this week
- Tile-based language built for AI computation across all scales. ☆120 · Updated Feb 8, 2026
- ☆86 · Updated this week
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆192 · Updated Jan 28, 2025
- Triton implementation of FlashAttention2 that adds custom masks. ☆167 · Updated Aug 14, 2024
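
Several of the entries above (the FlexAttention masking examples and the flex-attention tooling repository) build on PyTorch's flex_attention API. As a rough orientation, here is a minimal sketch of passing a custom mask to that API; the shapes, tensor names, and the causal mask are illustrative assumptions and are not drawn from any of the listed repositories.

```python
# Minimal sketch (assumption: PyTorch >= 2.5, which ships the flex_attention API).
# Shapes and the causal mask are illustrative, not taken from any listed repository.
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

B, H, S, D = 2, 4, 1024, 64  # batch, heads, sequence length, head dim
device = "cuda" if torch.cuda.is_available() else "cpu"
q, k, v = (torch.randn(B, H, S, D, device=device) for _ in range(3))

# mask_mod returns True where a query position may attend to a key position.
def causal(b, h, q_idx, kv_idx):
    return q_idx >= kv_idx

block_mask = create_block_mask(causal, B, H, S, S, device=device)
out = flex_attention(q, k, v, block_mask=block_mask)  # (B, H, S, D)
```

In this API, a mask_mod describes which positions may attend at all, while a score_mod can rewrite individual attention scores before the softmax; swapping in a different mask_mod is how other masking patterns are expressed.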