Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper
⭐ 803 · Aug 15, 2025 · Updated 8 months ago
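For orientation before the list of alternatives: the NSA paper combines three attention branches (compressed, selected, sliding-window) and mixes their outputs with learned per-token gates. Below is a minimal, hypothetical sketch of that gating idea, not this repository's actual API; `GatedSparseAttention`, `sliding_window_attn`, and `compressed_attn` are illustrative names, and the block-selection branch is omitted here (a sketch of it follows the list at the end of this page).

```python
import torch
from torch import nn

def sliding_window_attn(q, k, v, window):
    # causal local attention via an explicit mask (clarity over efficiency)
    n, scale = q.shape[-2], q.shape[-1] ** -0.5
    i = torch.arange(n, device=q.device)[:, None]
    j = torch.arange(n, device=q.device)[None, :]
    mask = (j <= i) & (j > i - window)
    sim = (q @ k.transpose(-2, -1)) * scale
    sim = sim.masked_fill(~mask, float('-inf'))
    return sim.softmax(dim=-1) @ v

def compressed_attn(q, k, v, block):
    # "compression" branch: attend over block-mean-pooled keys/values
    # (causal masking omitted for brevity)
    bk = k.unflatten(-2, (-1, block)).mean(dim=-2)
    bv = v.unflatten(-2, (-1, block)).mean(dim=-2)
    sim = (q @ bk.transpose(-2, -1)) * q.shape[-1] ** -0.5
    return sim.softmax(dim=-1) @ bv

class GatedSparseAttention(nn.Module):
    # hypothetical module: mixes branch outputs with per-token sigmoid gates
    def __init__(self, dim, window=64, block=16):
        super().__init__()
        self.window, self.block = window, block
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.to_gates = nn.Linear(dim, 2)  # one gate per branch

    def forward(self, x):                       # x: (batch, seq, dim)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        local = sliding_window_attn(q, k, v, self.window)
        coarse = compressed_attn(q, k, v, self.block)
        g = self.to_gates(x).sigmoid()          # (batch, seq, 2)
        return g[..., :1] * local + g[..., 1:] * coarse

out = GatedSparseAttention(64)(torch.randn(2, 128, 64))  # (2, 128, 64)
```

The repositories below provide hardware-aligned Triton/CUDA kernels for this kind of pattern; the sketch only shows the gated mixing of branches.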
Alternatives and similar repositories for native-sparse-attention-pytorch
Users interested in native-sparse-attention-pytorch are comparing it to the libraries listed below.
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ⭐ 995 · Feb 5, 2026 · Updated 3 months ago
- Implementation of the proposed DeepCrossAttention by Heddes et al. at Google Research, in PyTorch · ⭐ 104 · Apr 3, 2026 · Updated last month
- Efficient Triton implementation of Native Sparse Attention. · ⭐ 276 · May 23, 2025 · Updated 11 months ago
- MoBA: Mixture of Block Attention for Long-Context LLMs · ⭐ 2,109 · Apr 3, 2025 · Updated last year
- Research implementation of Native Sparse Attention (arXiv:2502.11089) · ⭐ 63 · Feb 19, 2025 · Updated last year
- DeepSeek Native Sparse Attention PyTorch implementation · ⭐ 117 · Dec 17, 2025 · Updated 4 months ago
- 🚀 Efficient implementations for emerging model architectures · ⭐ 5,032 · May 1, 2026 · Updated last week
- Explorations into adversarial losses on top of autoregressive loss for language modeling · ⭐ 41 · Dec 21, 2025 · Updated 4 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ⭐ 133 · Jun 24, 2025 · Updated 10 months ago
- Muon is Scalable for LLM Training · ⭐ 1,473 · Aug 3, 2025 · Updated 9 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training · ⭐ 798 · Apr 21, 2026 · Updated 2 weeks ago
- Helpful tools and examples for working with flex-attention · ⭐ 1,182 · Apr 13, 2026 · Updated 3 weeks ago
- Unofficial implementation of Titans, SOTA memory for transformers, in PyTorch · ⭐ 1,952 · Feb 9, 2026 · Updated 3 months ago
- DeeperGEMM: crazy optimized version · ⭐ 86 · May 5, 2025 · Updated last year
- Design hardware-friendly model architectures and migrate existing LLMs with minimal performance loss · ⭐ 461 · Apr 6, 2026 · Updated last month
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make it practical in Fast and Simplex, Ro… · ⭐ 48 · Sep 2, 2025 · Updated 8 months ago
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al., with a few convenient wrappers for regression, in PyTorch · ⭐ 79 · Apr 3, 2026 · Updated last month
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch · ⭐ 548 · May 16, 2025 · Updated 11 months ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs · ⭐ 64 · Mar 25, 2025 · Updated last year
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling · ⭐ 22 · Updated this week
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring · ⭐ 277 · Jul 6, 2025 · Updated 10 months ago
- [ICML 2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. · ⭐ 990 · Feb 25, 2026 · Updated 2 months ago
- FlashMLA: Efficient Multi-head Latent Attention Kernels · ⭐ 12,631 · Apr 30, 2026 · Updated last week
- Efficient LLM Inference over Long Sequences · ⭐ 394 · Jun 25, 2025 · Updated 10 months ago
- ⭐ 52 · May 19, 2025 · Updated 11 months ago
- [ICLR 2025, ICML 2025, NeurIPS 2025 Spotlight] Quantized Attention achieves speedup of 2-5x compared to FlashAttention, without losing end-t… · ⭐ 3,342 · Jan 17, 2026 · Updated 3 months ago
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek V3/R1 training. · ⭐ 2,949 · Jan 14, 2026 · Updated 3 months ago
- ⭐ 119 · May 19, 2025 · Updated 11 months ago
- Fast and memory-efficient exact attention · ⭐ 23,628 · Updated this week
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling · ⭐ 7,200 · Apr 24, 2026 · Updated 2 weeks ago
- ⭐ 48 · Dec 13, 2025 · Updated 4 months ago
- Quick implementation of nGPT, learning entirely on the hypersphere, from Nvidia AI
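A common thread across several entries above (NSA's selection branch, MoBA, XAttention, SpargeAttention) is deciding which key/value blocks each query attends to. Here is a minimal, hedged sketch of that top-k block selection under simplifying assumptions: no causal mask, and a dense gather instead of a fused kernel. `topk_block_sparse_attn` is an illustrative name, not the API of any listed library.

```python
import torch

def topk_block_sparse_attn(q, k, v, block=16, topk=4):
    # score mean-pooled key blocks per query, keep the top-k blocks,
    # and attend only within the gathered blocks (toy version)
    b, n, d = q.shape
    kb = k.unflatten(1, (-1, block))                 # (b, n_blocks, block, d)
    vb = v.unflatten(1, (-1, block))
    pooled = kb.mean(dim=2)                          # (b, n_blocks, d)
    scores = q @ pooled.transpose(-2, -1)            # (b, n, n_blocks)
    idx = scores.topk(topk, dim=-1).indices          # (b, n, topk)

    # gather the selected key/value blocks for every query position
    gidx = idx[..., None, None].expand(b, n, topk, block, d)
    k_sel = kb[:, None].expand(b, n, -1, block, d).gather(2, gidx).flatten(2, 3)
    v_sel = vb[:, None].expand(b, n, -1, block, d).gather(2, gidx).flatten(2, 3)

    sim = (q[:, :, None] @ k_sel.transpose(-2, -1)).squeeze(2) * d ** -0.5
    return (sim.softmax(-1)[:, :, None] @ v_sel).squeeze(2)   # (b, n, d)

x = torch.randn(2, 128, 32)
out = topk_block_sparse_attn(x, x, x)                # (2, 128, 32)
```

The Triton and CUDA kernels listed above avoid materializing `k_sel`/`v_sel` by fusing the selection into block-sparse GEMMs; the dense gather here is only for readability.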