Attention Kernels for Symmetric Power Transformers
☆129 · Updated Sep 25, 2025
Alternatives and similar repositories for power-attention
Users interested in power-attention are comparing it to the libraries listed below.
- Scalable and Stable Parallelization of Nonlinear RNNs · ☆29 · updated this week
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) · ☆24 · updated Jun 6, 2024
- ☆54 · updated May 20, 2024
- sigma-MoE layer · ☆21 · updated Jan 5, 2024
- 📄 Small Batch Size Training for Language Models · ☆80 · updated Oct 4, 2025
- ☆35 · updated Apr 12, 2024
- ☆58 · updated Jul 9, 2024
- HGRN2: Gated Linear RNNs with State Expansion · ☆56 · updated Aug 20, 2024
- Transformers components, but in Triton · ☆34 · updated May 9, 2025
- ☆19 · updated Dec 4, 2025
- Code for the paper "Function-Space Learning Rates" · ☆25 · updated Jun 3, 2025
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf · ☆21 · updated Jul 29, 2024
- Griffin MQA + Hawk Linear RNN Hybrid · ☆89 · updated Apr 26, 2024
- ☆24 · updated Sep 25, 2024
- A repository for research on medium-sized language models · ☆78 · updated May 23, 2024
- Unofficial implementation of the paper "Exploring the Space of Key-Value-Query Models with Intention" · ☆12 · updated May 24, 2023
- High-performance tokenized-language data loader, implemented as a Python C++ extension · ☆14 · updated Jul 22, 2024
- Display tensors directly from the GPU · ☆11 · updated Oct 12, 2025
- ☆11 · updated Oct 11, 2023
- [ICLR'25] "Understanding Bottlenecks of State Space Models through the Lens of Recency and Over-smoothing" by Peihao Wang, Ruisi Cai, Yue… · ☆17 · updated Mar 21, 2025
- POPGym Library in JAX · ☆12 · updated Apr 15, 2024
- Implementation of Hyena Hierarchy in JAX · ☆10 · updated Apr 30, 2023
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton · ☆75 · updated Aug 2, 2024
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… · ☆103 · updated Jun 14, 2024
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" · ☆27 · updated Apr 17, 2024
- ☆13 · updated Dec 15, 2025
- ☆21 · updated Oct 22, 2025
- GoldFinch and other hybrid transformer components · ☆12 · updated Dec 9, 2025
- Engineering the state of RNN language models (Mamba, RWKV, etc.) · ☆32 · updated May 25, 2024
- FlexAttention with FlashAttention3 support · ☆27 · updated Oct 5, 2024
- Here we will test various linear attention designs · ☆62 · updated Apr 25, 2024
- ☆29 · updated Feb 27, 2024
- [EMNLP 2023] Official implementation of ETSC (Exact Toeplitz-to-SSM Conversion) from the paper Accelerating Toeplitz… · ☆14 · updated Oct 17, 2023
- ☆124 · updated May 28, 2024
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning · ☆141 · updated Feb 25, 2026
- Awesome Triton Resources · ☆39 · updated Apr 27, 2025
- seqax = sequence modeling + JAX · ☆187 · updated Jul 23, 2025
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" · ☆249 · updated Jun 6, 2025
- ☆93 · updated Jul 5, 2024
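
For orientation when comparing these alternatives, the sketch below shows the attention variant named in the title in its naive quadratic form: symmetric power attention replaces the softmax with an even integer power p of the query-key dot product, which keeps scores nonnegative and admits a linear-cost recurrent form that dedicated kernels exploit. This is a minimal reference sketch, not power-attention's actual API; the function name, normalization epsilon, and masking details are illustrative assumptions.

```python
# Minimal quadratic-time reference sketch of symmetric power attention.
# Assumption: attention weights are (q_i . k_j)^p with even p, causally
# masked and row-normalized; the real power-attention kernels compute an
# equivalent result in fused/chunked form on GPU.
import torch

def power_attention_reference(q, k, v, p=2):
    """q, k, v: (batch, seq, dim); p: even power replacing softmax."""
    scores = torch.einsum("bsd,btd->bst", q, k) ** p        # (q_i . k_j)^p
    t = q.shape[1]
    causal = torch.tril(torch.ones(t, t, dtype=torch.bool,  # causal mask
                                   device=q.device))
    scores = scores.masked_fill(~causal, 0.0)
    scores = scores / scores.sum(-1, keepdim=True).clamp_min(1e-6)  # row-normalize
    return torch.einsum("bst,btd->bsd", scores, v)

# Tiny usage example
b, t, d = 2, 8, 16
q, k, v = (torch.randn(b, t, d) for _ in range(3))
out = power_attention_reference(q, k, v, p=2)
print(out.shape)  # torch.Size([2, 8, 16])
```

Because p is even, the powered scores are nonnegative without an exponential, so the same computation can be rewritten as a linear recurrence over a symmetric tensor-power state; that rewrite is where kernel libraries like the one above earn their speedups.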