Efficient Triton implementation of Native Sparse Attention.
☆275 · Updated May 23, 2025
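For orientation before the list: NSA computes three attention branches per query (compressed, block-selected, and sliding-window) and merges them with learned per-token gates. Below is a minimal PyTorch sketch of that merge, with hypothetical helper names and shapes; the actual Triton kernels fuse each branch into a single pass.

```python
import torch

def nsa_merge(o_cmp, o_slc, o_win, g):
    """Gated merge of NSA's three branch outputs (hypothetical helper).

    o_*: (batch, seq_len, num_heads, head_dim) outputs of the compressed,
         selected, and sliding-window attention branches
    g:   (batch, seq_len, num_heads, 3) sigmoid gates from a small
         per-token MLP
    """
    g = g.unsqueeze(-1)  # broadcast the three gates over head_dim
    return g[..., 0, :] * o_cmp + g[..., 1, :] * o_slc + g[..., 2, :] * o_win
```

The selected branch reuses the compressed branch's attention scores to pick its top-n KV blocks per query, which is what keeps the sparsity trainable end to end.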
Alternatives and similar repositories for native-sparse-attention-triton
Users interested in native-sparse-attention-triton are comparing it to the libraries listed below.
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ☆984 · Updated Feb 5, 2026
- Code for the paper [ICLR2025 Oral] "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" · ☆168 · Updated Oct 13, 2025
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ☆133 · Updated Jun 24, 2025
- qwen-nsa · ☆87 · Updated Oct 14, 2025
- ☆38 · Updated Mar 8, 2025
- Implementation of the sparse attention pattern proposed by the Deepseek team in their "Native Sparse Attention" paper · ☆800 · Updated Aug 15, 2025
- A sparse attention kernel supporting mixed sparse patterns · ☆497 · Updated Jan 18, 2026
- 🤖 FFPA: extends FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim; 1.8x~3x speedup 🎉 vs SDPA EA · ☆260 · Updated Feb 13, 2026
- ☆17 · Updated Jul 12, 2025
- ☆241 · Updated Nov 19, 2025
- ☆67 · Updated Apr 26, 2025
- 🚀 Efficient implementations for emerging model architectures · ☆4,878 · Updated this week
- Repo for the paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" · ☆51 · Updated Oct 31, 2024
- ☆110 · Updated Mar 12, 2026
- ☆119 · Updated May 19, 2025
- Research implementation of Native Sparse Attention (arXiv:2502.11089) · ☆63 · Updated Feb 19, 2025
- Flash-Muon: an efficient implementation of the Muon optimizer (see the Newton-Schulz sketch after this list) · ☆248 · Updated Jun 15, 2025
- ☆48 · Updated Dec 13, 2025
- Using FlexAttention to compute attention with different masking patterns (see the FlexAttention sketch after this list) · ☆47 · Updated Sep 22, 2024
- ☆51 · Updated May 19, 2025
- ☆100 · Updated Feb 11, 2026
- Distributed attention targeting linear scalability for ultra-long-context, heterogeneous-data training · ☆781 · Updated Apr 8, 2026
- ☆27 · Updated Mar 29, 2025
- Efficient transpose on Hopper GPUs · ☆28 · Updated Sep 6, 2025
- Xmixers: A collection of SOTA efficient token/channel mixers · ☆28 · Updated Sep 4, 2025
- Explorations into SDFT, proposed in "Self-Distillation Enables Continual Learning" by Shenfeld et al. of MIT · ☆31 · Updated Feb 6, 2026
- ☆68 · Updated Mar 21, 2025
- Implements Flash Attention using CuTe · ☆105 · Updated Dec 17, 2024
- Helpful tools and examples for working with flex-attention · ☆1,174 · Updated this week
- ☆139 · Updated May 29, 2025
- [ICML2025] SpargeAttention: a training-free sparse attention that accelerates inference for any model · ☆976 · Updated Feb 25, 2026
- MoBA: Mixture of Block Attention for Long-Context LLMs · ☆2,093 · Updated Apr 3, 2025
- ☆124 · Updated May 28, 2024
- Ring attention implementation with flash attention · ☆1,006 · Updated Sep 10, 2025
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training · ☆261 · Updated Aug 9, 2025
- [NeurIPS 2025] Scaling Language-centric Omnimodal Representation Learning · ☆38 · Updated this week
- 🔥 A minimal training framework for scaling FLA models · ☆370 · Updated Nov 15, 2025
- DeeperGEMM: an aggressively optimized version · ☆86 · Updated May 5, 2025
- ☆12 · Updated Nov 5, 2024
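Two list items above reference sketches. First, the step behind Flash-Muon: Muon's core operation orthogonalizes each 2-D weight update with a quintic Newton-Schulz iteration, and Flash-Muon supplies a faster kernel for that step rather than a different algorithm. A minimal PyTorch sketch, with coefficients taken from the widely circulated reference implementation; the usage at the bottom is hypothetical.

```python
import torch

@torch.no_grad()
def newton_schulz5(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    # Quintic Newton-Schulz iteration that approximately orthogonalizes a
    # 2-D matrix; production kernels typically run this in bfloat16.
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + 1e-7)  # scale so singular values are at most ~1
    transposed = G.size(0) > G.size(1)
    if transposed:
        X = X.mT
    for _ in range(steps):
        A = X @ X.mT
        X = a * X + (b * A + c * A @ A) @ X
    return X.mT if transposed else X

# Hypothetical Muon-style step: orthogonalize the momentum buffer, then
# apply it with a fixed learning rate.
W = torch.randn(512, 512)
momentum = torch.randn(512, 512)  # stand-in for the momentum-averaged gradient
W -= 0.02 * newton_schulz5(momentum)
```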
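Second, the FlexAttention pattern: masks are written as a predicate over (batch, head, q_idx, kv_idx) and compiled into a block mask. A minimal sketch, assuming PyTorch ≥ 2.5 and a CUDA device; the sliding-window size and tensor shapes are illustrative.

```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

B, H, S, D = 1, 4, 256, 64
q, k, v = (torch.randn(B, H, S, D, device="cuda") for _ in range(3))

# Sliding-window causal mask: each query attends to the previous 128 tokens.
def sliding_window(b, h, q_idx, kv_idx):
    return (q_idx >= kv_idx) & (q_idx - kv_idx < 128)

# B=None / H=None broadcast the mask across batch and heads.
block_mask = create_block_mask(sliding_window, B=None, H=None, Q_LEN=S, KV_LEN=S)
out = flex_attention(q, k, v, block_mask=block_mask)  # (B, H, S, D)
```

Swapping in a different mask predicate is enough to express causal, block-sparse, or prefix-LM patterns; wrapping flex_attention in torch.compile is what yields fused-kernel performance.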