lucidrains / gateloop-transformer
Implementation of GateLoop Transformer in Pytorch and Jax
☆87 · Updated 9 months ago
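For context, the core operation this repo implements is the data-controlled linear recurrence from the GateLoop paper (Katsch, 2023): a linear-attention-style state that is gated elementwise by data-dependent transitions at every step. The sketch below is a minimal, sequential illustration assuming real-valued sigmoid gates (the paper uses complex-valued state transitions, and practical implementations use more efficient parallel forms); the function name and shapes are illustrative and not the repository's API.

```python
# Minimal sketch of a GateLoop-style data-controlled linear recurrence.
# NOT the repo's API -- an assumption-laden illustration of the core idea:
#   state_t = a_t * state_{t-1} + k_t v_t^T ;  y_t = q_t^T state_t
import torch

def gateloop_sequential(q, k, v, a):
    """
    q, k: (batch, seq, dim_k)  queries / keys
    v:    (batch, seq, dim_v)  values
    a:    (batch, seq, dim_k)  data-dependent gates in (0, 1)
    returns: (batch, seq, dim_v)
    """
    b, n, dk = k.shape
    dv = v.shape[-1]
    state = torch.zeros(b, dk, dv, device=q.device, dtype=q.dtype)
    outputs = []
    for t in range(n):
        # gate the previous state, then accumulate the new key-value outer product
        state = a[:, t, :, None] * state + k[:, t, :, None] * v[:, t, None, :]
        # read the state out with the query
        outputs.append(torch.einsum('bd,bde->be', q[:, t], state))
    return torch.stack(outputs, dim=1)

# usage: sigmoid gates keep the recurrence stable
q = torch.randn(2, 16, 32)
k = torch.randn(2, 16, 32)
v = torch.randn(2, 16, 64)
a = torch.sigmoid(torch.randn(2, 16, 32))
out = gateloop_sequential(q, k, v, a)  # (2, 16, 64)
```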
Alternatives and similar repositories for gateloop-transformer:
Users interested in gateloop-transformer are comparing it to the libraries listed below.
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆57 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆96 · Updated 7 months ago
- ☆52 · Updated 5 months ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆99 · Updated 2 years ago
- Implementation of the proposed Adam-atan2 from Google Deepmind in Pytorch ☆103 · Updated 4 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆118 · Updated 5 months ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆85 · Updated 11 months ago
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆122 · Updated last year
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆55 · Updated 10 months ago
- Standalone Product Key Memory module in Pytorch - for augmenting Transformer models ☆78 · Updated 8 months ago
- Implementation of a Light Recurrent Unit in Pytorch ☆47 · Updated 5 months ago
- ☆79 · Updated 11 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆98 · Updated 3 months ago
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆83 · Updated last year
- Implementation of Infini-Transformer in Pytorch ☆110 · Updated 2 months ago
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single machine microbatches, in Pytorch ☆23 · Updated 2 months ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels ☆50 · Updated last week
- ☆27 · Updated last year
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated last year
- Attempt to make multiple residual streams from Bytedance's Hyper-Connections paper accessible to the public ☆80 · Updated last month
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆47 · Updated last week
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated last year
- A State-Space Model with Rational Transfer Function Representation ☆78 · Updated 10 months ago
- Pytorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at Deepmind ☆122 · Updated 7 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆108 · Updated 3 months ago
- Supporting Pytorch FSDP for optimizers ☆80 · Updated 3 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆60 · Updated 3 years ago
- ☆30 · Updated 4 months ago
- ☆39 · Updated last year