PiotrNawrot / nano-sparse-attention
The simplest implementation of recent Sparse Attention patterns for efficient LLM inference.
☆ 92 · Jul 17, 2025 · Updated 6 months ago
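For context, the "Sparse Attention patterns" mentioned in the description above restrict each query to a small subset of keys instead of the full causal context. The snippet below is not taken from this repository; it is a minimal PyTorch sketch of one common pattern (a causal sliding window plus a few attention-sink tokens) expressed as a boolean mask.

```python
# Minimal, illustrative sketch (not code from nano-sparse-attention):
# a causal sliding-window mask with a few "sink" tokens, one of the
# simplest sparse attention patterns used for efficient LLM inference.
import torch
import torch.nn.functional as F

def sparse_causal_mask(seq_len: int, window: int = 4, num_sinks: int = 1) -> torch.Tensor:
    # True = attention allowed. Each query attends to the first `num_sinks`
    # tokens plus the last `window` tokens at or before its own position.
    q = torch.arange(seq_len).unsqueeze(1)   # query positions
    k = torch.arange(seq_len).unsqueeze(0)   # key positions
    causal = k <= q
    local = (q - k) < window
    sink = k < num_sinks
    return causal & (local | sink)

def sparse_attention(q, k, v, window: int = 4, num_sinks: int = 1) -> torch.Tensor:
    # q, k, v: (batch, heads, seq_len, head_dim)
    mask = sparse_causal_mask(q.shape[-2], window, num_sinks).to(q.device)
    return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

if __name__ == "__main__":
    b, h, t, d = 1, 2, 8, 16
    q, k, v = (torch.randn(b, h, t, d) for _ in range(3))
    print(sparse_attention(q, k, v).shape)  # torch.Size([1, 2, 8, 16])
```

Other patterns implemented in libraries like those listed below (block-sparse, vertical-slash, retrieval-based selection) follow the same idea but choose the allowed key set differently or fuse the mask into a custom kernel.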
Alternatives and similar repositories for nano-sparse-attention
Users interested in nano-sparse-attention are comparing it with the libraries listed below.
- Efficient Transformers with Dynamic Token Pooling ☆ 67 · May 20, 2023 · Updated 2 years ago
- ☆ 11 · Oct 11, 2023 · Updated 2 years ago
- The evaluation framework for training-free sparse attention in LLMs ☆ 119 · Jan 27, 2026 · Updated 3 weeks ago
- ☆ 20 · May 30, 2024 · Updated last year
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆ 33 · Aug 14, 2024 · Updated last year
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆ 129 · Jun 24, 2025 · Updated 7 months ago
- Official Implementation of APB (ACL 2025 main Oral) and Spava. ☆ 33 · Jan 30, 2026 · Updated 2 weeks ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆ 18 · Mar 15, 2024 · Updated last year
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆ 23 · Aug 18, 2024 · Updated last year
- RADLADS training code ☆ 37 · May 7, 2025 · Updated 9 months ago
- [EMNLP 2023] Official implementation of the algorithm ETSC: Exact Toeplitz-to-SSM Conversion from our EMNLP 2023 paper - Accelerating Toeplitz… ☆ 14 · Oct 17, 2023 · Updated 2 years ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆ 24 · Jun 6, 2024 · Updated last year
- Checkpointable dataset utilities for foundation model training ☆ 32 · Jan 29, 2024 · Updated 2 years ago
- ☆ 16 · Dec 19, 2024 · Updated last year
- train with kittens! ☆ 63 · Oct 25, 2024 · Updated last year
- Source-to-Source Debuggable Derivatives in Pure Python ☆ 15 · Jan 23, 2024 · Updated 2 years ago
- Expanding linear RNN state-transition matrix eigenvalues to include negatives improves state-tracking tasks and language modeling without… ☆ 20 · Mar 15, 2025 · Updated 11 months ago
- FlexAttention w/ FlashAttention3 Support ☆ 27 · Oct 5, 2024 · Updated last year
- ☆ 37 · Oct 11, 2025 · Updated 4 months ago
- ☆ 33 · Jul 9, 2025 · Updated 7 months ago
- ☆ 15 · Jun 4, 2024 · Updated last year
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆ 42 · Dec 29, 2025 · Updated last month
- Awesome Triton Resources ☆ 39 · Apr 27, 2025 · Updated 9 months ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆ 524 · Feb 10, 2025 · Updated last year
- Learning to Model Editing Processes ☆ 26 · Aug 3, 2025 · Updated 6 months ago
- Algorithms for approximate attention in LLMs ☆ 21 · Apr 14, 2025 · Updated 10 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆ 89 · Oct 30, 2024 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆ 56 · Aug 20, 2024 · Updated last year
- Transformers components but in Triton ☆ 34 · May 9, 2025 · Updated 9 months ago
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆ 40 · Dec 2, 2023 · Updated 2 years ago
- Efficient PScan implementation in PyTorch ☆ 17 · Jan 2, 2024 · Updated 2 years ago
- AGaLiTe: Approximate Gated Linear Transformers for Online Reinforcement Learning (Published in TMLR) ☆ 23 · Oct 15, 2024 · Updated last year
- The code for creating the iGSM datasets in papers "Physics of Language Models Part 2.1, Grade-School Math and the Hidden Reasoning Proces… ☆ 84 · Jan 12, 2025 · Updated last year
- ☆ 19 · Dec 4, 2025 · Updated 2 months ago
- Language models scale reliably with over-training and on downstream tasks ☆ 99 · Apr 2, 2024 · Updated last year
- Triton implementation of bi-directional (non-causal) linear attention ☆ 65 · Feb 2, 2026 · Updated 2 weeks ago
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity". ☆ 30 · Nov 12, 2024 · Updated last year
- ☆ 20 · Nov 4, 2025 · Updated 3 months ago
- Symphony — A decentralized multi-agent framework that enables intelligent agents to collaborate seamlessly across heterogeneous edge devi… ☆ 30 · Oct 30, 2025 · Updated 3 months ago