xiayuqing0622 / flex_head_fa
Fast and memory-efficient exact attention
☆75 · Updated 10 months ago
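For orientation, here is a minimal plain-PyTorch sketch of the exact scaled-dot-product attention that FlashAttention-style kernels like flex_head_fa compute. This is a reference implementation, not the library's own kernel or API; the `reference_attention` name and `causal` flag are illustrative, and flex-head variants additionally allow the value head dimension to differ from the query/key one (check the repo for the actual interface).

```python
import torch

def reference_attention(q, k, v, causal=False):
    # q, k: (batch, heads, seq, head_dim); v: (batch, heads, seq, v_head_dim).
    # In flex-head variants v_head_dim may differ from head_dim.
    scale = q.shape[-1] ** -0.5
    # Attention scores: (batch, heads, q_len, k_len)
    scores = torch.einsum("bhqd,bhkd->bhqk", q, k) * scale
    if causal:
        # Mask out future positions (strict upper triangle).
        mask = torch.triu(
            torch.ones(scores.shape[-2:], dtype=torch.bool, device=scores.device),
            diagonal=1,
        )
        scores = scores.masked_fill(mask, float("-inf"))
    # Weighted sum of values; d here binds to v's head dimension.
    return torch.einsum("bhqk,bhkd->bhqd", scores.softmax(dim=-1), v)
```

FlashAttention-style kernels produce the same output but tile the computation so the full scores matrix never materializes in GPU memory.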
Alternatives and similar repositories for flex_head_fa
Users interested in flex_head_fa are comparing it to the libraries listed below.
- The evaluation framework for training-free sparse attention in LLMs ☆114 · Updated this week
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆92 · Updated 6 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆229 · Updated 7 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- ☆104 · Updated 11 months ago
- ☆83 · Updated 2 years ago
- ☆132 · Updated 8 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆128 · Updated 7 months ago
- ☆269 · Updated 7 months ago
- ☆150 · Updated 2 years ago
- Stick-breaking attention ☆62 · Updated 7 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆263 · Updated 3 months ago
- Official implementation for Training LLMs with MXFP4 ☆118 · Updated 9 months ago
- ☆44 · Updated 3 months ago
- Linear Attention Sequence Parallelism (LASP) ☆88 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆246 · Updated 7 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated 6 months ago
- ☆158 · Updated 11 months ago
- Transformers components but in Triton ☆34 · Updated 8 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆93 · Updated last year
- ☆74 · Updated this week
- Here we will test various linear attention designs. ☆62 · Updated last year
- 16-fold memory access reduction with nearly no loss ☆109 · Updated 10 months ago
- Triton implementation of FlashAttention2 that adds Custom Masks. ☆163 · Updated last year
- ☆63 · Updated 7 months ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆148 · Updated last year
- Flash Attention in 300-500 lines of CUDA/C++ ☆36 · Updated 5 months ago
- Efficient triton implementation of Native Sparse Attention. ☆261 · Updated 8 months ago
- Experiments on Multi-Head Latent Attention ☆99 · Updated last year