FlashRNN - Fast RNN Kernels with I/O Awareness
☆177 · Updated Oct 20, 2025
Alternatives and similar repositories for flashrnn
Users interested in flashrnn are comparing it to the libraries listed below.
- Official Code Repository for the paper "Key-value memory in the brain" (☆31, updated Feb 25, 2025)
- Scalable and Stable Parallelization of Nonlinear RNNs (☆29, updated Mar 6, 2026)
- Awesome Triton Resources (☆39, updated Apr 27, 2025)
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels (☆87, updated Mar 18, 2026)
- Quantized Attention on GPU (☆44, updated Nov 22, 2024)
- FlexAttention w/ FlashAttention3 Support (☆27, updated Oct 5, 2024)
- Framework to reduce autotune overhead to zero for well-known deployments (☆97, updated Sep 19, 2025)
- Efficient PScan implementation in PyTorch (☆17, updated Jan 2, 2024)
- Code for the paper https://arxiv.org/pdf/2309.06979.pdf (☆21, updated Jul 29, 2024)
- (no description) (☆13, updated Dec 15, 2025)
- Standalone Flash Attention v2 kernel without libtorch dependency (☆112, updated Sep 10, 2024)
- Combining SOAP and MUON (☆19, updated Feb 11, 2025)
- Open-sourced code associated with the AAAI-25 paper "On the Expressiveness and Length Generalization of Selective State-Space Models on …" (☆16, updated Sep 18, 2025)
- Accelerated First Order Parallel Associative Scan (☆196, updated Jan 7, 2026)
- An experimental communicating attention kernel based on DeepEP (☆35, updated Jul 29, 2025)
- [ICLR 2025] Official PyTorch implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule (☆516, updated Mar 13, 2026)
- (no description) (☆65, updated Apr 26, 2025)
- Parallel Associative Scan for Language Models (☆18, updated Jan 8, 2024)
- Official JAX implementation of xLSTM, including fast and efficient training and inference code; 7B model available at https://huggingface.… (☆105, updated Jan 8, 2025)
- (no description) (☆107, updated Mar 9, 2024)
- HGRN2: Gated Linear RNNs with State Expansion (☆56, updated Aug 20, 2024)
- A method for evaluating the high-level coherence of machine-generated texts; identifies high-level coherence issues in transformer-based … (☆11, updated Mar 18, 2023)
- Official repository of xLSTM (☆2,131, updated Nov 4, 2025)
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" (☆18, updated Mar 15, 2024)
- PyTorch implementation of the Flash Spectral Transform Unit (☆22, updated Sep 19, 2024)
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… (☆30, updated this week)
- (no description) (☆19, updated Dec 4, 2025)
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) (☆24, updated Jun 6, 2024)
- Xmixers: a collection of SOTA efficient token/channel mixers (☆28, updated Sep 4, 2025)
- Official project page for HLA: Higher-order Linear Attention (https://arxiv.org/abs/2510.27258) (☆45, updated Jan 6, 2026)
- Tests of various linear attention designs (☆62, updated Apr 25, 2024)
- Engineering the state of RNN language models (Mamba, RWKV, etc.) (☆32, updated May 25, 2024)
- (no description) (☆119, updated May 19, 2025)
- (no description) (☆20, updated May 30, 2024)
- (no description) (☆51, updated Jan 28, 2024)
- Implementations of various linear RNN layers using PyTorch and Triton (☆55, updated Aug 4, 2023)
- (no description) (☆58, updated Jul 9, 2024)
- (no description) (☆35, updated Nov 22, 2024)
- PyTorch implementation for PaLM: A Hybrid Parser and Language Model (☆10, updated Jan 7, 2020)