flagos-ai / FlagAttention
A collection of memory efficient attention operators implemented in the Triton language.
☆287 · Updated Jun 5, 2024
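As a rough illustration of what "implemented in the Triton language" means here, the sketch below shows a generic Triton kernel plus its Python launcher: a fused row-wise softmax, the operation at the heart of attention. This is not FlagAttention's actual API; it is a minimal example using the standard `triton` / `triton.language` interfaces, and the kernel name, block-size choice, and contiguity assumption are all illustrative.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def row_softmax_kernel(out_ptr, in_ptr, n_cols, BLOCK_SIZE: tl.constexpr):
    # One program instance handles one row of a contiguous 2D tensor.
    row = tl.program_id(0)
    offs = tl.arange(0, BLOCK_SIZE)
    mask = offs < n_cols
    x = tl.load(in_ptr + row * n_cols + offs, mask=mask, other=-float("inf"))
    x = x - tl.max(x, axis=0)  # subtract the row max for numerical stability
    num = tl.exp(x)
    den = tl.sum(num, axis=0)
    tl.store(out_ptr + row * n_cols + offs, num / den, mask=mask)

def row_softmax(x: torch.Tensor) -> torch.Tensor:
    """Row-wise softmax for a 2D CUDA tensor (illustrative sketch only)."""
    assert x.is_cuda and x.dim() == 2
    x = x.contiguous()
    out = torch.empty_like(x)
    n_rows, n_cols = x.shape
    # Each row must fit in a single block for this simple single-pass kernel.
    BLOCK_SIZE = triton.next_power_of_2(n_cols)
    row_softmax_kernel[(n_rows,)](out, x, n_cols, BLOCK_SIZE=BLOCK_SIZE)
    return out
```

Memory-efficient attention kernels in this style typically go further and fuse the softmax with the Q·Kᵀ and P·V matmuls (the FlashAttention approach), so the full attention matrix never materializes in GPU memory.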
Alternatives and similar repositories for FlagAttention
Users interested in FlagAttention are comparing it to the libraries listed below.
- FlagGems is an operator library for large language models implemented in the Triton language. ☆898 · Updated this week
- ☆104 · Updated Nov 7, 2024
- Quantized Attention on GPU. ☆44 · Updated Nov 22, 2024
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆192 · Updated Jan 28, 2025
- ☆261 · Updated Jul 11, 2024
- ☆288 · Updated this week
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- A collection of kernels written in the Triton language. ☆178 · Updated Jan 27, 2026
- A framework to reduce autotune overhead to zero for well-known deployments. ☆96 · Updated Sep 19, 2025
- A flash attention tutorial written in Python, Triton, CUDA, and CUTLASS. ☆486 · Updated Jan 20, 2026
- Cataloging released Triton kernels. ☆294 · Updated Sep 9, 2025
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆596 · Updated Aug 12, 2025
- FP8 flash attention implemented with the CUTLASS library on the Ada architecture. ☆78 · Updated Aug 12, 2024
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆46 · Updated Jun 11, 2025
- Examples of CUDA implementations using CUTLASS CuTe. ☆270 · Updated Jul 1, 2025
- 🤖 FFPA: extends FlashAttention-2 with Split-D and ~O(1) SRAM complexity for large head dimensions, achieving a 1.8x–3x speedup over SDPA EA. ☆251 · Updated this week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆326 · Updated this week
- An easy-to-understand TensorOp matmul tutorial. ☆410 · Updated this week
- Applied AI experiments and examples for PyTorch. ☆315 · Updated Aug 22, 2025
- Transformers components, but in Triton. ☆34 · Updated May 9, 2025
- ☆118 · Updated May 19, 2025
- Tile primitives for speedy kernels. ☆3,139 · Updated this week
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆752 · Updated Aug 6, 2025
- A Triton-based implementation of Sparse Mixture of Experts. ☆265 · Updated Oct 3, 2025
- An experiment in using Tangent to autodiff Triton. ☆82 · Updated Jan 22, 2024
- Flash Attention implemented using CuTe. ☆100 · Updated Dec 17, 2024
- FlexAttention with FlashAttention-3 support. ☆27 · Updated Oct 5, 2024
- ☆85 · Updated Jan 23, 2025
- DeeperGEMM: a heavily optimized version. ☆74 · Updated May 5, 2025
- A distributed compiler based on Triton for parallel systems. ☆1,350 · Updated this week
- 🚀 Efficient implementations of state-of-the-art linear attention models. ☆4,379 · Updated this week
- A shared middle layer for Triton compilation. ☆326 · Updated Dec 5, 2025
- Awesome Triton Resources. ☆39 · Updated Apr 27, 2025
- ☆105 · Updated Sep 9, 2024
- A quirky assortment of CuTe kernels. ☆798 · Updated this week
- FlashInfer: a kernel library for LLM serving. ☆4,935 · Updated this week
- How to optimize algorithms in CUDA. ☆2,819 · Updated this week
- Optimized BERT transformer inference on NVIDIA GPUs (https://arxiv.org/abs/2210.03052). ☆477 · Updated Mar 15, 2024
- Dynamic memory management for serving LLMs without PagedAttention. ☆458 · Updated May 30, 2025