Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper
★ 800 · updated Aug 15, 2025
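For context on what the library provides, here is a minimal usage sketch in the spirit of the repository's README. The `SparseAttention` module name and its hyperparameters (`sliding_window_size`, `compress_block_size`, `selection_block_size`, `num_selected_blocks`) are recalled from that README and should be treated as assumptions to verify against the installed version.

```python
# Minimal sketch (assumed API, verify against the repo's README) of running
# an NSA-style attention module over a batch of token embeddings.
import torch
from native_sparse_attention_pytorch import SparseAttention

attn = SparseAttention(
    dim = 512,                 # model dimension
    dim_head = 64,             # per-head dimension
    heads = 8,
    sliding_window_size = 2,   # local sliding-window branch
    compress_block_size = 4,   # coarse compressed-token branch
    selection_block_size = 4,  # fine-grained block-selection branch
    num_selected_blocks = 2    # blocks each query attends to
)

tokens = torch.randn(2, 31, 512)   # (batch, sequence, dim)
attended = attn(tokens)            # output keeps the input shape

assert attended.shape == tokens.shape
```

The three branches configured above (compressed, selected, and sliding-window attention) mirror the paper's design, in which learned per-head gates combine their outputs.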
Alternatives and similar repositories for native-sparse-attention-pytorch
Users interested in native-sparse-attention-pytorch are comparing it to the libraries listed below.
- Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" (★ 984 · updated Feb 5, 2026)
- Implementation of the proposed DeepCrossAttention by Heddes et al. at Google Research, in Pytorch (★ 102 · updated Apr 3, 2026)
- MoBA: Mixture of Block Attention for Long-Context LLMs (★ 2,093 · updated Apr 3, 2025)
- Efficient Triton implementation of Native Sparse Attention (★ 275 · updated May 23, 2025)
- Research implementation of Native Sparse Attention (arXiv:2502.11089) (★ 63 · updated Feb 19, 2025)
- DeepSeek Native Sparse Attention pytorch implementation (★ 116 · updated Dec 17, 2025)
- Efficient implementations for emerging model architectures (★ 4,878 · updated this week)
- Explorations into adversarial losses on top of autoregressive loss for language modeling (★ 41 · updated Dec 21, 2025)
- Muon is Scalable for LLM Training (★ 1,458 · updated Aug 3, 2025)
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training (★ 781 · updated Apr 8, 2026)
- An efficient implementation of the NSA (Native Sparse Attention) kernel (★ 133 · updated Jun 24, 2025)
- Helpful tools and examples for working with flex-attention (★ 1,174 · updated this week)
- Unofficial implementation of Titans, SOTA memory for transformers, in Pytorch (★ 1,948 · updated Feb 9, 2026)
- DeeperGEMM: crazy optimized version (★ 86 · updated May 5, 2025)
- Design hardware-friendly model architectures and migrate existing LLMs with minimal performance loss (★ 457 · updated Apr 6, 2026)
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make it practical in Fast and Simplex, Ro… (★ 47 · updated Sep 2, 2025)
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al. with a few convenient wrappers for regression, in Pytorch (★ 80 · updated Apr 3, 2026)
- Implementation of Ring Attention, from Liu et al. at Berkeley AI, in Pytorch (★ 548 · updated May 16, 2025)
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs (★ 64 · updated Mar 25, 2025)
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling (★ 22 · updated Apr 9, 2026)
- [ICML 2025] SpargeAttention: A training-free sparse attention that accelerates any model inference (★ 976 · updated Feb 25, 2026)
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring (★ 277 · updated Jul 6, 2025)
- Efficient LLM Inference over Long Sequences (★ 393 · updated Jun 25, 2025)
- FlashMLA: Efficient Multi-head Latent Attention Kernels (★ 12,558 · updated Apr 7, 2026)
- (no description) (★ 51 · updated May 19, 2025)
- [ICLR 2025, ICML 2025, NeurIPS 2025 Spotlight] Quantized Attention achieves speedup of 2-5x compared to FlashAttention, without losing end-t… (★ 3,296 · updated Jan 17, 2026)
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek V3/R1 training (★ 2,943 · updated Jan 14, 2026)
- (no description) (★ 119 · updated May 19, 2025)
- (no description) (★ 48 · updated Dec 13, 2025)
- Fast and memory-efficient exact attention (★ 23,344 · updated this week)
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models (★ 343 · updated Feb 23, 2025)
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI (★ 293 · updated Jun 3, 2025)
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling (★ 6,376 · updated this week)
- FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x speedup vs SDPA EA (★ 260 · updated Feb 13, 2026)
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) (★ 55 · updated Mar 25, 2025)
- Distributed Compiler based on Triton for Parallel Systems (★ 1,403 · updated Apr 10, 2026)
- (no description) (★ 67 · updated Apr 26, 2025)
- OctGPT: Octree-based Multiscale Autoregressive Models for 3D Shape Generation [SIGGRAPH 2025] (★ 201 · updated Sep 18, 2025)
- A PyTorch native platform for training generative AI models (★ 5,242 · updated this week)