lucidrains / native-sparse-attention-pytorch
Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper
★ 797 · Aug 15, 2025 · Updated 6 months ago
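For orientation, a minimal usage sketch of the module this repository exposes, assuming an interface along the lines of its README; the class name `SparseAttention` and the branch parameters (`sliding_window_size`, `compress_block_size`, `selection_block_size`, `num_selected_blocks`) are recalled from memory and may not match the current API exactly:

```python
import torch
from native_sparse_attention_pytorch import SparseAttention

# the three NSA branches (compressed, selected, sliding-window) are
# configured through the block-size parameters below (names assumed)
attn = SparseAttention(
    dim = 512,
    dim_head = 64,
    heads = 8,
    sliding_window_size = 2,
    compress_block_size = 4,
    selection_block_size = 4,
    num_selected_blocks = 2
)

tokens = torch.randn(2, 31, 512)   # (batch, sequence, model dim)
attended = attn(tokens)            # output keeps the input shape

assert attended.shape == tokens.shape
```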
Alternatives and similar repositories for native-sparse-attention-pytorch
Users interested in native-sparse-attention-pytorch are comparing it to the libraries listed below.
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ★ 964 · Feb 5, 2026 · Updated last week
- Implementation of the proposed DeepCrossAttention by Heddes et al at Google research, in Pytorch · ★ 96 · Feb 24, 2025 · Updated 11 months ago
- MoBA: Mixture of Block Attention for Long-Context LLMs · ★ 2,044 · Apr 3, 2025 · Updated 10 months ago
- Efficient triton implementation of Native Sparse Attention. · ★ 262 · May 23, 2025 · Updated 8 months ago
- research impl of Native Sparse Attention (2502.11089) · ★ 63 · Feb 19, 2025 · Updated 11 months ago
- DeepSeek Native Sparse Attention pytorch implementation · ★ 115 · Dec 17, 2025 · Updated last month
- 🚀 Efficient implementations of state-of-the-art linear attention models · ★ 4,379 · Updated this week
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training · ★ 639 · Updated this week
- Explorations into adversarial losses on top of autoregressive loss for language modeling · ★ 41 · Dec 21, 2025 · Updated last month
- Muon is Scalable for LLM Training · ★ 1,426 · Aug 3, 2025 · Updated 6 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ★ 129 · Jun 24, 2025 · Updated 7 months ago
- Helpful tools and examples for working with flex-attention · ★ 1,127 · Feb 8, 2026 · Updated last week
- [ICML2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. · ★ 927 · Dec 31, 2025 · Updated last month
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) · ★ 429 · Sep 23, 2025 · Updated 4 months ago
- Unofficial implementation of Titans, SOTA memory for transformers, in Pytorch · ★ 1,935 · Updated this week
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs · ★ 61 · Mar 25, 2025 · Updated 10 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch · ★ 549 · May 16, 2025 · Updated 9 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring · ★ 269 · Jul 6, 2025 · Updated 7 months ago
- [ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves speedup of 2-5x compared to FlashAttention, without losing end-t… · ★ 3,159 · Jan 17, 2026 · Updated 3 weeks ago
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al. with a few convenient wrappers for regression, in Pytorch · ★ 73 · Nov 18, 2025 · Updated 2 months ago
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) · ★ 55 · Mar 25, 2025 · Updated 10 months ago
- Efficient LLM Inference over Long Sequences · ★ 394 · Jun 25, 2025 · Updated 7 months ago
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI · ★ 293 · Jun 3, 2025 · Updated 8 months ago
- FlashMLA: Efficient Multi-head Latent Attention Kernels · ★ 12,456 · Feb 6, 2026 · Updated last week
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek V3/R1 training. · ★ 2,919 · Jan 14, 2026 · Updated last month
- DeeperGEMM: crazy optimized version · ★ 73 · May 5, 2025 · Updated 9 months ago
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make practical in Fast and Simplex, Ro… · ★ 46 · Sep 2, 2025 · Updated 5 months ago
- OctGPT: Octree-based Multiscale Autoregressive Models for 3D Shape Generation [SIGGRAPH 2025] · ★ 197 · Sep 18, 2025 · Updated 4 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling · ★ 22 · Updated this week
- Distributed Compiler based on Triton for Parallel Systems · ★ 1,350 · Updated this week
- (no description) · ★ 65 · Apr 26, 2025 · Updated 9 months ago
- (no description) · ★ 52 · May 19, 2025 · Updated 8 months ago
- A PyTorch native platform for training generative AI models · ★ 5,045 · Feb 8, 2026 · Updated last week
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling · ★ 6,162 · Feb 3, 2026 · Updated last week
- (no description) · ★ 118 · May 19, 2025 · Updated 8 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models · ★ 341 · Feb 23, 2025 · Updated 11 months ago
- Fast and memory-efficient exact attention · ★ 22,231 · Updated this week
- FP8 flash attention implemented on the Ada architecture using the CUTLASS library · ★ 78 · Aug 12, 2024 · Updated last year
- Framework to reduce autotune overhead to zero for well known deployments. · ★ 96 · Sep 19, 2025 · Updated 4 months ago