DeepSeek Native Sparse Attention pytorch implementation
★115 · Dec 17, 2025 · Updated 3 months ago
Alternatives and similar repositories for NSA-pytorch
Users that are interested in NSA-pytorch are comparing it to the libraries listed below.
- Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ★977 · Feb 5, 2026 · Updated last month
- qwen-nsa · ★87 · Oct 14, 2025 · Updated 5 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ★131 · Jun 24, 2025 · Updated 8 months ago
- LongAttn: Selecting Long-context Training Data via Token-level Attention · ★15 · Jul 16, 2025 · Updated 8 months ago
- High-performance RMSNorm implemented using SM core storage (registers and shared memory) · ★29 · Jan 22, 2026 · Updated 2 months ago
- Code for the paper [ICLR 2025 Oral] "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" · ★164 · Oct 13, 2025 · Updated 5 months ago
- [NeurIPS 2024] Official implementation of the paper "Perceiving Longer Sequences With Bi-Directional Cross-Attention Transformers" · ★20 · Mar 10, 2025 · Updated last year
- Creating the DeepSeek V3 model from scratch · ★26 · Mar 28, 2025 · Updated 11 months ago
- A PyTorch-native library for training speculative decoding models · ★43 · Updated this week
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads · ★531 · Feb 10, 2025 · Updated last year
- C++ library for finding strongly connected components in parallel, based on the paper https://dl.acm.org/citation.cfm?id=2851161 · ★12 · May 22, 2018 · Updated 7 years ago
- Sample code using NVSHMEM on multi-GPU systems · ★30 · Jan 22, 2023 · Updated 3 years ago
- ★128 · Updated this week
- DeepSeek-V3.2-Exp DSA warmup Lightning Indexer training operator based on tilelang · ★44 · Nov 19, 2025 · Updated 4 months ago
- MFAE-YOLO, an object detection method for aerial remote-sensing images · ★16 · Jan 27, 2026 · Updated last month
- Expert-specialization MoE solution based on CUTLASS · ★27 · Jan 19, 2026 · Updated 2 months ago
- ★55 · Feb 5, 2026 · Updated last month
- TileFusion, an experimental C++ macro-kernel template library that elevates the abstraction level of CUDA C for tile processing · ★106 · Jun 28, 2025 · Updated 8 months ago
- Codebase for decoding compressed trust · ★25 · May 7, 2024 · Updated last year
- Benchmark tests supporting the TiledCUDA library · ★18 · Nov 19, 2024 · Updated last year
- [ICML 2025] SpargeAttention, a training-free sparse attention method that accelerates inference for any model · ★961 · Feb 25, 2026 · Updated 3 weeks ago
- ★136 · May 29, 2025 · Updated 9 months ago
- TileGraph, an experimental DNN compiler that uses static code generation and kernel-fusion techniques · ★11 · Sep 18, 2024 · Updated last year
- Algorithms for approximate attention in LLMs · ★21 · Apr 14, 2025 · Updated 11 months ago
- CUDA SGEMM optimization notes · ★15 · Oct 31, 2023 · Updated 2 years ago
- Efficient LLM inference over long sequences · ★393 · Jun 25, 2025 · Updated 8 months ago
- A collection of memory-efficient attention operators implemented in the Triton language · ★288 · Jun 5, 2024 · Updated last year
- Analyzing problems in AI with math and code · ★27 · Jul 28, 2025 · Updated 7 months ago
- A sparse attention kernel supporting mixed sparse patterns · ★480 · Jan 18, 2026 · Updated 2 months ago
- Tile-based runtime for ultra-low-latency LLM inference · ★683 · Mar 8, 2026 · Updated 2 weeks ago
- MoBA: Mixture of Block Attention for Long-Context LLMs
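Several of the repositories above (NSA itself, SpargeAttention, MoBA, the mixed-pattern sparse kernel) revolve around block-wise top-k key selection: scoring whole key blocks cheaply, then attending only within the highest-scoring blocks. The sketch below illustrates that general idea in plain dense PyTorch under simplifying assumptions (single head, mean-pooled block summaries, no causal mask); the function name and shapes are illustrative and do not come from any listed repository.

```python
import torch
import torch.nn.functional as F

def topk_block_sparse_attention(q, k, v, block_size=16, top_k=2):
    """Toy block-sparse attention: each query attends only to the top-k
    key blocks, ranked by similarity to mean-pooled block summaries.
    q, k, v: (seq_len, dim); seq_len must be divisible by block_size."""
    seq_len, dim = k.shape
    n_blocks = seq_len // block_size
    # Mean-pool keys within each block to get one summary vector per block.
    k_blocks = k.view(n_blocks, block_size, dim).mean(dim=1)   # (n_blocks, dim)
    # Score every query against every block summary; keep the top-k blocks.
    block_scores = q @ k_blocks.T                              # (seq_len, n_blocks)
    top_idx = block_scores.topk(top_k, dim=-1).indices         # (seq_len, top_k)
    # Expand the block choice into a per-token attention mask.
    block_mask = torch.zeros(seq_len, n_blocks, dtype=torch.bool)
    block_mask.scatter_(1, top_idx, True)
    token_mask = block_mask.repeat_interleave(block_size, dim=1)  # (seq_len, seq_len)
    # Standard scaled-dot-product attention restricted to selected tokens.
    scores = (q @ k.T) / dim**0.5
    scores = scores.masked_fill(~token_mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

q = torch.randn(64, 32)
k = torch.randn(64, 32)
v = torch.randn(64, 32)
out = topk_block_sparse_attention(q, k, v)
print(out.shape)  # torch.Size([64, 32])
```

The projects above replace this dense mask with hardware-aligned Triton or CUDA kernels that skip the unselected blocks entirely, which is where the actual speedup comes from; the dense version only mimics the selection logic.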