XunhaoLai / native-sparse-attention-triton
Efficient Triton implementation of Native Sparse Attention.
☆263 · May 23, 2025 · Updated 8 months ago
Alternatives and similar repositories for native-sparse-attention-triton
Users interested in native-sparse-attention-triton are comparing it to the libraries listed below.
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ☆965 · Feb 5, 2026 · Updated last week
- Code for paper: [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference · ☆160 · Oct 13, 2025 · Updated 4 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ☆129 · Jun 24, 2025 · Updated 7 months ago
- qwen-nsa · ☆87 · Oct 14, 2025 · Updated 4 months ago
- 🤖 FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. · ☆251 · Updated this week
- ☆65 · Apr 26, 2025 · Updated 9 months ago
- Using FlexAttention to compute attention with different masking patterns · ☆47 · Sep 22, 2024 · Updated last year
- ☆52 · May 19, 2025 · Updated 8 months ago
- A sparse attention kernel supporting mixed sparse patterns · ☆455 · Jan 18, 2026 · Updated 3 weeks ago
- Explorations into the proposed SDFT, Self-Distillation Enables Continual Learning, from Shenfeld et al. of MIT · ☆29 · Feb 6, 2026 · Updated last week
- Xmixers: A collection of SOTA efficient token/channel mixers · ☆28 · Sep 4, 2025 · Updated 5 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer · ☆235 · Jun 15, 2025 · Updated 8 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models · ☆4,379 · Updated this week
- ☆104 · Nov 7, 2024 · Updated last year
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training · ☆639 · Updated this week
- ☆223 · Nov 19, 2025 · Updated 2 months ago
- ☆118 · May 19, 2025 · Updated 8 months ago
- Source-to-Source Debuggable Derivatives in Pure Python · ☆15 · Jan 23, 2024 · Updated 2 years ago
- ☆124 · May 28, 2024 · Updated last year
- DeeperGEMM: crazy optimized version · ☆74 · May 5, 2025 · Updated 9 months ago
- [ICML2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. · ☆939 · Dec 31, 2025 · Updated last month
- ☆22 · May 5, 2025 · Updated 9 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs · ☆204 · Dec 4, 2025 · Updated 2 months ago
- ☆131 · May 29, 2025 · Updated 8 months ago
- Implementation of Flash Attention using Cute. · ☆100 · Dec 17, 2024 · Updated last year
- research impl of Native Sparse Attention (2502.11089) · ☆63 · Feb 19, 2025 · Updated 11 months ago
- Helpful tools and examples for working with flex-attention · ☆1,127 · Feb 8, 2026 · Updated last week
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton. · ☆75 · Aug 2, 2024 · Updated last year
- ☆106 · Feb 25, 2025 · Updated 11 months ago
- ☆67 · Mar 21, 2025 · Updated 10 months ago
- Ring attention implementation with flash attention · ☆980 · Sep 10, 2025 · Updated 5 months ago
- ☆93 · Updated this week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. · ☆326 · Updated this week
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training · ☆258 · Aug 9, 2025 · Updated 6 months ago
- 🔥 A minimal training framework for scaling FLA models · ☆344 · Nov 15, 2025 · Updated 3 months ago
- ☆27 · Mar 29, 2025 · Updated 10 months ago
- Experiment of using Tangent to autodiff triton · ☆82 · Jan 22, 2024 · Updated 2 years ago
- Triton-based implementation of Sparse Mixture of Experts. · ☆265 · Oct 3, 2025 · Updated 4 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring · ☆269 · Jul 6, 2025 · Updated 7 months ago