Dao-AILab / grouped-latent-attention
☆131 · May 29, 2025 · Updated 8 months ago
Alternatives and similar repositories for grouped-latent-attention
Users interested in grouped-latent-attention are comparing it to the libraries listed below.
- ☆22 · May 5, 2025 · Updated 9 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring · ☆269 · Jul 6, 2025 · Updated 7 months ago
- ☆223 · Nov 19, 2025 · Updated 2 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ☆129 · Jun 24, 2025 · Updated 7 months ago
- ☆52 · May 19, 2025 · Updated 8 months ago
- ☆61 · Nov 27, 2023 · Updated 2 years ago
- The evaluation framework for training-free sparse attention in LLMs · ☆119 · Jan 27, 2026 · Updated 3 weeks ago
- [ICML 2025] LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models · ☆17 · Nov 4, 2025 · Updated 3 months ago
- Quantized Attention on GPU · ☆44 · Nov 22, 2024 · Updated last year
- ☆15 · Jan 12, 2026 · Updated last month
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling · ☆42 · Dec 29, 2025 · Updated last month
- Code for the paper "Function-Space Learning Rates" · ☆25 · Jun 3, 2025 · Updated 8 months ago
- ☆271 · Jun 6, 2025 · Updated 8 months ago
- Study of CUTLASS · ☆22 · Nov 10, 2024 · Updated last year
- ☆27 · Jul 28, 2025 · Updated 6 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) · ☆35 · Mar 7, 2025 · Updated 11 months ago
- ☆14 · Mar 2, 2025 · Updated 11 months ago
- ☆53 · May 20, 2024 · Updated last year
- VideoNSA: Native Sparse Attention Scales Video Understanding · ☆78 · Nov 16, 2025 · Updated 3 months ago
- Odysseus: Playground of LLM Sequence Parallelism · ☆79 · Jun 17, 2024 · Updated last year
- ☆121 · Feb 4, 2026 · Updated 2 weeks ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) · ☆24 · Jun 6, 2024 · Updated last year
- Helpful tools and examples for working with flex-attention (see the sketch after this list) · ☆1,127 · Feb 8, 2026 · Updated last week
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ☆965 · Feb 5, 2026 · Updated last week
- ☆15 · Sep 22, 2024 · Updated last year
- Benchmark tests supporting the TiledCUDA library · ☆18 · Nov 19, 2024 · Updated last year
- Official implementation of our paper "THOR: Tool-Integrated Hierarchical Optimization via RL for Mathematical Reasoning" · ☆31 · Sep 19, 2025 · Updated 4 months ago
- ☆65 · Apr 26, 2025 · Updated 9 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer · ☆235 · Jun 15, 2025 · Updated 8 months ago
- An experimental communicating attention kernel based on DeepEP · ☆35 · Jul 29, 2025 · Updated 6 months ago
- Distributed Compiler based on Triton for Parallel Systems · ☆1,358 · Updated this week
- ☆579 · Sep 23, 2025 · Updated 4 months ago
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice rou… · ☆28 · May 3, 2025 · Updated 9 months ago
- Linear Attention Sequence Parallelism (LASP) · ☆88 · Jun 4, 2024 · Updated last year
- [NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) · ☆446 · Jan 26, 2026 · Updated 3 weeks ago
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) · ☆430 · Sep 23, 2025 · Updated 4 months ago
- [ICCV 2025] Dynamic-VLM · ☆28 · Dec 16, 2024 · Updated last year
- Code for Draft Attention · ☆99 · May 22, 2025 · Updated 8 months ago
- DeeperGEMM: crazy optimized version · ☆74 · May 5, 2025 · Updated 9 months ago
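Many of the repositories above build custom attention variants; the flex-attention tools repo in this list targets PyTorch's `torch.nn.attention.flex_attention` API (available in PyTorch 2.5+). Below is a minimal sketch of that API's `score_mod` pattern, not code from any listed repository: the tensor shapes and the causal mask are illustrative assumptions.

```python
# Minimal flex_attention sketch (assumes PyTorch >= 2.5).
# Shapes are arbitrary: batch=2, heads=4, seq_len=128, head_dim=64.
import torch
from torch.nn.attention.flex_attention import flex_attention

B, H, S, D = 2, 4, 128, 64
device = "cuda" if torch.cuda.is_available() else "cpu"
q, k, v = (torch.randn(B, H, S, D, device=device) for _ in range(3))

def causal(score, b, h, q_idx, kv_idx):
    # A score_mod edits one attention score at a time: keep the score
    # where the query may attend to the key, otherwise mask it with -inf
    # so the softmax assigns it zero weight.
    return torch.where(q_idx >= kv_idx, score, -float("inf"))

out = flex_attention(q, k, v, score_mod=causal)
print(out.shape)  # torch.Size([2, 4, 128, 64])
```

For speed, `flex_attention` is typically wrapped in `torch.compile`, which fuses the `score_mod` into a single attention kernel; the eager call above runs a slower reference path but is convenient for checking the mask logic.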