sustcsonglin / flash-linear-rnn
Implementations of various linear RNN layers using PyTorch and Triton
☆50 · Updated last year
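For context, here is a minimal PyTorch sketch of the kind of layer these repositories implement: a diagonal gated linear recurrence h_t = a_t ⊙ h_{t-1} + b_t ⊙ x_t, computed with a naive sequential loop. The function and parameter names are illustrative only and are not taken from flash-linear-rnn; libraries like it replace this loop with fused Triton kernels or parallel scans.

```python
# Illustrative sketch (not code from this repository): a reference diagonal
# gated linear recurrence, h_t = a_t * h_{t-1} + b_t * x_t, in plain PyTorch.
import torch

def linear_rnn_reference(x, a, b):
    """x, a, b: (batch, seq_len, dim). Returns hidden states of the same shape."""
    batch, seq_len, dim = x.shape
    h = torch.zeros(batch, dim, dtype=x.dtype, device=x.device)
    outs = []
    for t in range(seq_len):
        # element-wise (diagonal) recurrence; a acts as a per-channel decay gate
        h = a[:, t] * h + b[:, t] * x[:, t]
        outs.append(h)
    return torch.stack(outs, dim=1)

# usage
x = torch.randn(2, 16, 8)
a = torch.sigmoid(torch.randn(2, 16, 8))  # decay gates in (0, 1)
b = torch.ones_like(a)
y = linear_rnn_reference(x, a, b)
```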
Alternatives and similar repositories for flash-linear-rnn:
Users interested in flash-linear-rnn are comparing it to the libraries listed below.
- Unofficial implementation of the Linear Recurrent Unit (LRU, Orvieto et al. 2023) ☆52 · Updated 4 months ago
- Parallelizing non-linear sequential models over the sequence length ☆51 · Updated 2 months ago
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆122 · Updated last year
- PyTorch implementation of Simplified Structured State-Spaces for Sequence Modeling (S5) ☆76 · Updated 11 months ago
- Sequence Modeling with Structured State Spaces ☆63 · Updated 2 years ago
- Unofficial implementation of Linear Recurrent Units, by DeepMind, in PyTorch ☆68 · Updated last year
- ☆27 · Updated 8 months ago
- A State-Space Model with Rational Transfer Function Representation ☆78 · Updated 10 months ago
- ☆52 · Updated 5 months ago
- ☆39 · Updated last year
- Implementation of GateLoop Transformer in PyTorch and JAX ☆87 · Updated 9 months ago
- Accelerated First Order Parallel Associative Scan ☆177 · Updated 7 months ago
- PyTorch implementation of Structured State Space for Sequence Modeling (S4), based on Annotated S4 ☆77 · Updated last year
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆64 · Updated 11 months ago
- The accompanying code for "Simplifying and Understanding State Space Models with Diagonal Linear RNNs" (Ankit Gupta, Harsh Mehta, Jonatha… ☆20 · Updated 2 years ago
- HGRN2: Gated Linear RNNs with State Expansion ☆53 · Updated 7 months ago
- ☆164 · Updated 2 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in PyTorch ☆99 · Updated 2 years ago
- A PyTorch wrapper of parallel exclusive scan in CUDA ☆11 · Updated last year
- ☆23 · Updated 6 months ago
- ☆47 · Updated last year
- ☆33 · Updated last year
- Implementation of Griffin from the paper "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆52 · Updated 2 months ago
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆34 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆95 · Updated 7 months ago
- ☆30 · Updated 4 months ago
- Curse-of-memory phenomenon of RNNs in sequence modelling ☆19 · Updated last week
- Transformers with doubly stochastic attention ☆45 · Updated 2 years ago
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆83 · Updated last year
- ☆30 · Updated 5 months ago