🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention"
★993, updated Feb 5, 2026
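For orientation, the sketch below illustrates the block-selection idea at the heart of NSA: score key blocks coarsely per query, keep only the top-k blocks, and run dense attention inside them. This is a toy PyTorch illustration under stated assumptions, not this repository's Triton API: the function name `block_sparse_attention`, the mean-pooled block scoring, and the shapes are made up for the example, and the paper's compression branch, sliding-window branch, gating, and causal masking are all omitted.

```python
# Hypothetical sketch of NSA-style block selection (PyTorch, single head,
# no batching). Not this repository's API.
import torch
import torch.nn.functional as F

def block_sparse_attention(q, k, v, block_size=64, top_k=4):
    # q, k, v: [T, D]; assumes T is divisible by block_size for brevity.
    T, D = k.shape
    n_blocks = T // block_size
    kb = k.reshape(n_blocks, block_size, D)
    vb = v.reshape(n_blocks, block_size, D)

    # Coarse relevance per (query, key block): query vs. mean key of each block.
    block_keys = kb.mean(dim=1)                          # [n_blocks, D]
    block_scores = q @ block_keys.T                      # [T, n_blocks]
    top = block_scores.topk(min(top_k, n_blocks), dim=-1).indices  # [T, top_k]

    # Gather only the selected blocks for each query, then attend densely
    # inside them (the sparse part: everything else is never touched).
    sel_k = kb[top].reshape(T, -1, D)                    # [T, top_k*block_size, D]
    sel_v = vb[top].reshape(T, -1, D)
    scores = torch.einsum("td,tnd->tn", q, sel_k) / D ** 0.5
    weights = F.softmax(scores, dim=-1)
    return torch.einsum("tn,tnd->td", weights, sel_v)

# Example: 256 tokens, head dim 32; each query attends to its 4 best 64-token blocks.
q = k = v = torch.randn(256, 32)
out = block_sparse_attention(q, k, v)
print(out.shape)  # torch.Size([256, 32])
```

A hardware-aligned kernel would fuse the selection and attention steps rather than materializing the gathered blocks in memory; the sketch materializes them purely for clarity.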
Alternatives and similar repositories for native-sparse-attention
Users interested in native-sparse-attention are comparing it to the libraries listed below.
- Implementation of the sparse attention pattern proposed by the Deepseek team in their "Native Sparse Attention" paper (★804, updated Aug 15, 2025)
- Efficient Triton implementation of Native Sparse Attention (★275, updated May 23, 2025)
- 🚀 Efficient implementations for emerging model architectures (★4,999, updated this week)
- MoBA: Mixture of Block Attention for Long-Context LLMs (★2,108, updated Apr 3, 2025)
- An efficient implementation of the NSA (Native Sparse Attention) kernel (★133, updated Jun 24, 2025)
- A sparse attention kernel supporting mixed sparse patterns (★503, updated Jan 18, 2026)
- qwen-nsa (★87, updated Oct 14, 2025)
- Muon is Scalable for LLM Training (★1,469, updated Aug 3, 2025)
- Distributed compiler based on Triton for parallel systems (★1,420, updated Apr 22, 2026)
- Quantized Attention on GPU (★44, updated Nov 22, 2024)
- Helpful tools and examples for working with flex-attention (★1,179, updated Apr 13, 2026)
- FlashInfer: Kernel Library for LLM Serving (★5,544, updated this week)
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads (★540, updated Feb 10, 2025)
- [ICML 2025] SpargeAttention: A training-free sparse attention that accelerates any model inference (★991, updated Feb 25, 2026)
- 🔥 A minimal training framework for scaling FLA models (★385, updated Apr 22, 2026)
- A distributed attention towards linear scalability for ultra-long-context, heterogeneous-data training (★795, updated Apr 21, 2026)
- Ring attention implementation with flash attention (★1,014, updated Sep 10, 2025)
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculate the attention… (★1,210, updated Apr 8, 2026)
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference (★382, updated Jul 10, 2025)
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels (★5,928, updated this week)
- ★114, updated Feb 25, 2025
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… (★834, updated Mar 6, 2025)
- DeeperGEMM: crazy optimized version (★86, updated May 5, 2025)
- DeepSeek Native Sparse Attention PyTorch implementation (★117, updated Dec 17, 2025)
- ★66, updated Apr 26, 2025
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training (★262, updated Aug 9, 2025)
- Efficient Triton kernels for LLM training (★6,315, updated this week)
- Code for paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference (★168, updated Oct 13, 2025)
- ★244, updated Nov 19, 2025
- ★52, updated May 19, 2025
- A fast communication-overlapping library for tensor/expert parallelism on GPUs (★1,295, updated Aug 28, 2025)
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for long-context transformer model training and inference (★666, updated Jan 15, 2026)
- Tile primitives for speedy kernels (★3,326, updated Apr 25, 2026)
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment (★762, updated Aug 6, 2025)
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS (★506, updated Jan 20, 2026)
- Efficient LLM inference over long sequences (★394, updated Jun 25, 2025)
- Fast low-bit matmul kernels in Triton (★445, updated Apr 24, 2026)
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek V3/R1 training (★2,951, updated Jan 14, 2026)
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling (★7,144, updated Apr 24, 2026)