Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper
★799 · Aug 15, 2025 · Updated 7 months ago
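As context for the comparisons below, here is a minimal usage sketch of the repository's `SparseAttention` module, configured with the three NSA branches (compression, block selection, sliding window). The module path and keyword names follow the project's README-style API as an assumption and may differ between versions:

```python
# Minimal sketch (assumed API): NSA-style sparse attention over a token sequence.
import torch
from native_sparse_attention_pytorch import SparseAttention  # assumed module path

attn = SparseAttention(
    dim = 512,                 # model dimension
    dim_head = 64,             # dimension per head
    heads = 8,                 # number of attention heads
    sliding_window_size = 2,   # local sliding-window branch
    compress_block_size = 4,   # block size for the compressed (coarse) branch
    selection_block_size = 4,  # block size for the fine selection branch
    num_selected_blocks = 2    # top-k blocks each query attends to
)

tokens = torch.randn(2, 31, 512)  # (batch, sequence, dim)
out = attn(tokens)                # output keeps the input shape
assert out.shape == tokens.shape
```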
Alternatives and similar repositories for native-sparse-attention-pytorch
Users interested in native-sparse-attention-pytorch are comparing it to the libraries listed below.
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ★978 · Feb 5, 2026 · Updated last month
- Implementation of DeepCrossAttention, proposed by Heddes et al. at Google Research, in Pytorch · ★96 · Feb 24, 2025 · Updated last year
- MoBA: Mixture of Block Attention for Long-Context LLMs · ★2,086 · Apr 3, 2025 · Updated 11 months ago
- Efficient Triton implementation of Native Sparse Attention · ★272 · May 23, 2025 · Updated 10 months ago
- Research implementation of Native Sparse Attention (arXiv:2502.11089) · ★63 · Feb 19, 2025 · Updated last year
- DeepSeek Native Sparse Attention PyTorch implementation · ★115 · Dec 17, 2025 · Updated 3 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models · ★4,692 · Updated this week
- Explorations into adversarial losses on top of autoregressive loss for language modeling · ★41 · Dec 21, 2025 · Updated 3 months ago
- Muon is Scalable for LLM Training · ★1,450 · Aug 3, 2025 · Updated 7 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training · ★723 · Updated this week
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ★132 · Jun 24, 2025 · Updated 9 months ago
- Helpful tools and examples for working with flex-attention · ★1,161 · Feb 8, 2026 · Updated last month
- Unofficial implementation of Titans, SOTA memory for transformers, in Pytorch · ★1,935 · Feb 9, 2026 · Updated last month
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) · ★435 · Feb 28, 2026 · Updated last month
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al. with a few convenient wrappers for regression, in Pytorch · ★73 · Nov 18, 2025 · Updated 4 months ago
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make it practical in Fast and Simplex, Ro… · ★46 · Sep 2, 2025 · Updated 6 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch · ★548 · May 16, 2025 · Updated 10 months ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs · ★62 · Mar 25, 2025 · Updated last year
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling · ★22 · Mar 18, 2026 · Updated last week
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring · ★274 · Jul 6, 2025 · Updated 8 months ago
- [ICML 2025] SpargeAttention: A training-free sparse attention that accelerates any model inference · ★969 · Feb 25, 2026 · Updated last month
- DeeperGEMM: crazy optimized version · ★75 · May 5, 2025 · Updated 10 months ago
- Efficient LLM Inference over Long Sequences · ★394 · Jun 25, 2025 · Updated 9 months ago
- FlashMLA: Efficient Multi-head Latent Attention Kernels · ★12,541 · Feb 6, 2026 · Updated last month
- (no description) · ★52 · May 19, 2025 · Updated 10 months ago
- [ICLR 2025, ICML 2025, NeurIPS 2025 Spotlight] Quantized Attention achieves speedup of 2-5x compared to FlashAttention, without losing end-t… · ★3,249 · Jan 17, 2026 · Updated 2 months ago
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek V3/R1 training · ★2,936 · Jan 14, 2026 · Updated 2 months ago
- (no description) · ★119 · May 19, 2025 · Updated 10 months ago
- Fast and memory-efficient exact attention · ★22,938 · Updated this week
- (no description) · ★48 · Dec 13, 2025 · Updated 3 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models · ★341 · Feb 23, 2025 · Updated last year
- Quick implementation of nGPT, learning entirely on the hypersphere, from Nvidia AI · ★293 · Jun 3, 2025 · Updated 9 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling · ★6,289 · Mar 22, 2026 · Updated last week
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) · ★55 · Mar 25, 2025 · Updated last year
- FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x speedup vs SDPA EA · ★255 · Feb 13, 2026 · Updated last month
- Distributed Compiler based on Triton for Parallel Systems · ★1,398 · Mar 11, 2026 · Updated 2 weeks ago
- (no description) · ★65 · Apr 26, 2025 · Updated 11 months ago
- A PyTorch native platform for training generative AI models · ★5,191 · Updated this week
- OctGPT: Octree-based Multiscale Autoregressive Models for 3D Shape Generation [SIGGRAPH 2025] · ★201 · Sep 18, 2025 · Updated 6 months ago