lucidrains / gateloop-transformer
Implementation of GateLoop Transformer in Pytorch and Jax
☆87 · Updated 8 months ago
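For background, GateLoop (Katsch, 2023) generalizes linear attention by making the recurrent state transition data-controlled: at each step an input-dependent gate decays the outer-product memory before the usual key/value update, and the query reads the state out. Below is a minimal sequential sketch of that recurrence in PyTorch, written for clarity rather than taken from this repository; the actual implementation uses a parallel associative scan, the paper also allows complex-valued gates, and every name here (`gateloop_scan`, the gate tensor `a`) is illustrative.

```python
import torch

def gateloop_scan(q, k, v, a):
    # Naive O(n) sequential reference for the GateLoop recurrence
    # (Katsch, 2023):  S_t = a_t * S_{t-1} + k_t v_t^T,  y_t = S_t^T q_t
    #   q, k, a: (batch, seq, d_k);  v: (batch, seq, d_v)
    # `a` holds data-controlled forget gates in (0, 1); the complex-valued
    # gates from the paper are omitted to keep the sketch readable.
    b, n, d_k = q.shape
    d_v = v.shape[-1]
    S = q.new_zeros(b, d_k, d_v)  # outer-product state (associative memory)
    ys = []
    for t in range(n):
        # decay the state with the data-controlled gate, then rank-1 update
        S = a[:, t, :, None] * S + k[:, t, :, None] * v[:, t, None, :]
        # read the state out against the current query
        ys.append(torch.einsum('bd,bdv->bv', q[:, t], S))
    return torch.stack(ys, dim=1)  # (batch, seq, d_v)

# toy shape check
b, n, d_k, d_v = 2, 8, 16, 16
q, k = torch.randn(b, n, d_k), torch.randn(b, n, d_k)
v = torch.randn(b, n, d_v)
a = torch.sigmoid(torch.randn(b, n, d_k))  # squash gates into (0, 1)
print(gateloop_scan(q, k, v, a).shape)     # torch.Size([2, 8, 16])
```

Because the update is an elementwise-gated affine recurrence, it can be evaluated in logarithmic depth with an associative scan, which is what makes the approach practical at long sequence lengths.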
Alternatives and similar repositories for gateloop-transformer:
Users interested in gateloop-transformer are comparing it to the libraries listed below.
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts · ☆116 · Updated 4 months ago
- Explorations into the recently proposed Taylor Series Linear Attention · ☆93 · Updated 6 months ago
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" · ☆57 · Updated last year
- Implementation of Infini-Transformer in Pytorch · ☆109 · Updated last month
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) · ☆122 · Updated last year
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch · ☆98 · Updated 2 years ago
- A MAD laboratory to improve AI architecture designs 🧪 · ☆105 · Updated 2 months ago
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence · ☆59 · Updated 3 years ago
- Standalone Product Key Memory module in Pytorch - for augmenting Transformer models · ☆78 · Updated 7 months ago
- ☆52 · Updated 4 months ago
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto · ☆55 · Updated 9 months ago
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX · ☆82 · Updated last year
- CUDA implementation of autoregressive linear attention, with all the latest research findings · ☆44 · Updated last year
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single machine microbatches, in Pytorch · ☆23 · Updated last month
- ☆27 · Updated last year
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. · ☆30 · Updated 2 months ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… · ☆53 · Updated last year
- Explorations into the proposal from the paper "Grokfast, Accelerated Grokking by Amplifying Slow Gradients" · ☆96 · Updated 2 months ago
- Implementation of a Light Recurrent Unit in Pytorch · ☆47 · Updated 4 months ago
- Implementation of the proposed Adam-atan2 from Google Deepmind in Pytorch · ☆101 · Updated 3 months ago
- Sequence Modeling with Structured State Spaces · ☆63 · Updated 2 years ago
- Griffin MQA + Hawk Linear RNN Hybrid · ☆85 · Updated 10 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" · ☆36 · Updated last year
- Implementation of 🌻 Mirasol, SOTA Multimodal Autoregressive model out of Google Deepmind, in Pytorch · ☆88 · Updated last year
- Triton Implementation of HyperAttention Algorithm · ☆47 · Updated last year
- Accelerated First Order Parallel Associative Scan · ☆172 · Updated 6 months ago
- Some common Huggingface transformers in maximal update parametrization (µP) · ☆79 · Updated 2 years ago
- A State-Space Model with Rational Transfer Function Representation · ☆77 · Updated 9 months ago
- Engineering the state of RNN language models (Mamba, RWKV, etc.) · ☆32 · Updated 9 months ago
- ☆78 · Updated 10 months ago