FlashRNN - Fast RNN Kernels with I/O Awareness
☆175 · Updated Oct 20, 2025
Alternatives and similar repositories for flashrnn
Users interested in flashrnn are comparing it to the libraries listed below.
- Official Code Repository for the paper "Key-value memory in the brain" ☆31 · Updated Feb 25, 2025
- Awesome Triton Resources ☆39 · Updated Apr 27, 2025
- Quantized Attention on GPU ☆44 · Updated Nov 22, 2024
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels ☆87 · Updated this week
- Framework to reduce autotune overhead to zero for well-known deployments ☆97 · Updated Sep 19, 2025
- Efficient PScan implementation in PyTorch ☆17 · Updated Jan 2, 2024
- ☆13 · Updated Dec 15, 2025
- Scalable and Stable Parallelization of Nonlinear RNNs ☆29 · Updated Oct 21, 2025
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated Oct 5, 2024
- Open-sourcing code associated with the AAAI-25 paper "On the Expressiveness and Length Generalization of Selective State-Space Models on …" ☆14 · Updated Sep 18, 2025
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf ☆21 · Updated Jul 29, 2024
- ☆65 · Updated Apr 26, 2025
- PyTorch implementation of the Flash Spectral Transform Unit ☆21 · Updated Sep 19, 2024
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated Jun 6, 2024
- Parallel Associative Scan for Language Models ☆18 · Updated Jan 8, 2024
- HGRN2: Gated Linear RNNs with State Expansion ☆56 · Updated Aug 20, 2024
- An experimental communicating attention kernel based on DeepEP ☆35 · Updated Jul 29, 2025
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to … ☆30 · Updated Jan 28, 2026
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule