xiayuqing0622 / flex_head_fa
Fast and memory-efficient exact attention
☆74 · Updated 9 months ago
Alternatives and similar repositories for flex_head_fa
Users interested in flex_head_fa are comparing it to the libraries listed below.
- The evaluation framework for training-free sparse attention in LLMs ☆106 · Updated 2 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆91 · Updated 5 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆223 · Updated 6 months ago
- ☆133 · Updated 6 months ago
- ☆83 · Updated 2 years ago
- ☆155 · Updated 10 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆126 · Updated 5 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆257 · Updated 2 months ago
- ☆150 · Updated 2 years ago
- ☆259 · Updated 6 months ago
- Stick-breaking attention ☆62 · Updated 5 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- ☆101 · Updated 9 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆89 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆172 · Updated last year
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆135 · Updated last year
- Experiments on Multi-Head Latent Attention ☆99 · Updated last year
- Triton implementation of FlashAttention2 that adds Custom Masks. ☆155 · Updated last year
- Official implementation for Training LLMs with MXFP4 ☆115 · Updated 7 months ago
- Linear Attention Sequence Parallelism (LASP) ☆88 · Updated last year
- Transformers components but in Triton ☆34 · Updated 7 months ago
- ☆115 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆243 · Updated 6 months ago
- ☆42 · Updated last month
- Flash Attention in 300-500 lines of CUDA/C++ ☆36 · Updated 4 months ago
- Efficient Triton implementation of Native Sparse Attention. ☆254 · Updated 7 months ago
- ☆53 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆60 · Updated last year
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated 5 months ago
- 🔥 LLM-powered GPU kernel synthesis: Train models to convert PyTorch ops into optimized Triton kernels via SFT+RL. Multi-turn compilation… ☆107 · Updated last month