Efficient Triton implementation of Native Sparse Attention.
★272 · May 23, 2025 · Updated 10 months ago
Alternatives and similar repositories for native-sparse-attention-triton
Users interested in native-sparse-attention-triton are comparing it to the libraries listed below.
- Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ★978 · Feb 5, 2026 · Updated last month
- Code for the paper "[ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" · ★167 · Oct 13, 2025 · Updated 5 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ★132 · Jun 24, 2025 · Updated 9 months ago
- qwen-nsa · ★87 · Oct 14, 2025 · Updated 5 months ago
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper · ★799 · Aug 15, 2025 · Updated 7 months ago
- A sparse attention kernel supporting mixed sparse patterns · ★485 · Jan 18, 2026 · Updated 2 months ago
- FFPA: extends FlashAttention-2 with Split-D and ~O(1) SRAM complexity for large headdim, 1.8x~3x speedup vs SDPA EA · ★255 · Feb 13, 2026 · Updated last month
- ★17 · Jul 12, 2025 · Updated 8 months ago
- ★240 · Nov 19, 2025 · Updated 4 months ago
- ★65 · Apr 26, 2025 · Updated 11 months ago
- Efficient implementations of state-of-the-art linear attention models · ★4,692 · Updated this week
- ★109 · Mar 12, 2026 · Updated 2 weeks ago
- The repo for the paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" · ★51 · Oct 31, 2024 · Updated last year
- ★119 · May 19, 2025 · Updated 10 months ago
- Research implementation of Native Sparse Attention (arXiv:2502.11089) · ★63 · Feb 19, 2025 · Updated last year
- Flash-Muon: An Efficient Implementation of the Muon Optimizer · ★247 · Jun 15, 2025 · Updated 9 months ago
- ★48 · Dec 13, 2025 · Updated 3 months ago
- Using FlexAttention to compute attention with different masking patterns · ★47 · Sep 22, 2024 · Updated last year
- ★52 · May 19, 2025 · Updated 10 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training · ★723 · Updated this week
- ★97 · Feb 11, 2026 · Updated last month
- ★27 · Mar 29, 2025 · Updated last year
- Implementation of Flash Attention using CuTe · ★103 · Dec 17, 2024 · Updated last year
- Effective transpose on Hopper GPU · ★28 · Sep 6, 2025 · Updated 6 months ago
- Xmixers: A collection of SOTA efficient token/channel mixers · ★28 · Sep 4, 2025 · Updated 6 months ago
- Explorations into the proposed SDFT, "Self-Distillation Enables Continual Learning", from Shenfeld et al. of MIT · ★30 · Feb 6, 2026 · Updated last month
- ★68 · Mar 21, 2025 · Updated last year
- Helpful tools and examples for working with flex-attention · ★1,161 · Feb 8, 2026 · Updated last month
- ★136 · May 29, 2025 · Updated 10 months ago
- [ICML 2025] SpargeAttention: A training-free sparse attention that accelerates inference for any model · ★969 · Feb 25, 2026 · Updated last month
- MoBA: Mixture of Block Attention for Long-Context LLMs · ★2,086 · Apr 3, 2025 · Updated 11 months ago
- Ring attention implementation with flash attention · ★998 · Sep 10, 2025 · Updated 6 months ago
- ★124 · May 28, 2024 · Updated last year
- [ICLR 2025] COAT: Compressing Optimizer States and Activations for Memory-Efficient FP8 Training · ★262 · Aug 9, 2025 · Updated 7 months ago
- [NeurIPS 2025] Scaling Language-centric Omnimodal Representation Learning · ★38 · Feb 6, 2026 · Updated last month
- DeeperGEMM: crazy optimized version · ★75 · May 5, 2025 · Updated 10 months ago
- A minimal training framework for scaling FLA models · ★359 · Nov 15, 2025 · Updated 4 months ago
- A domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels · ★5,432 · Updated this week
- ★12 · Nov 5, 2024 · Updated last year
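
The repositories above share one core idea: each query attends only to a selected subset of key/value blocks rather than the full sequence. As a rough, framework-free sketch of NSA-style top-k block selection (plain NumPy, not any listed repository's actual kernel; the function name and the mean-pooled block scoring are illustrative assumptions), the pattern looks like:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def topk_block_sparse_attention(q, k, v, block_size=4, top_k=2):
    """Illustrative sketch: score key blocks with mean-pooled keys,
    keep the top_k blocks per query, and attend only within them."""
    T, d = k.shape
    n_blocks = T // block_size
    # Mean-pool keys within each block -> (n_blocks, d)
    k_blocks = k[: n_blocks * block_size].reshape(n_blocks, block_size, d).mean(axis=1)
    # Block relevance score for every (query, block) pair -> (Tq, n_blocks)
    block_scores = q @ k_blocks.T
    # Indices of the top_k highest-scoring blocks per query
    sel = np.argsort(block_scores, axis=-1)[:, -top_k:]
    out = np.zeros((q.shape[0], d))
    for i in range(q.shape[0]):
        # Gather the token positions covered by the selected blocks
        idx = np.concatenate(
            [np.arange(b * block_size, (b + 1) * block_size) for b in sel[i]]
        )
        # Dense attention restricted to the selected positions
        attn = softmax((q[i] @ k[idx].T) / np.sqrt(d))
        out[i] = attn @ v[idx]
    return out
```

When `top_k` equals the number of blocks this reduces to ordinary dense attention, which is a handy sanity check; the real kernels in the list add causality, a compression branch, a sliding window, and hardware-aligned Triton/CUDA tiling on top of this selection step.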