lucidrains / adjacent-attention-network
Graph neural network message passing reframed as a Transformer with local attention
☆69 · Updated 2 years ago
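As a rough illustration of the one-line description above (message passing expressed as attention restricted to each node's neighborhood), here is a minimal sketch in plain PyTorch. It is an assumption-laden toy, not the repository's actual API: the function name, single-head attention without learned projections, and the dense boolean adjacency matrix are all illustrative choices.

```python
import torch

def adjacency_masked_attention(x, adj):
    # Toy sketch of "message passing as local attention" (not the repo's API).
    # x:   (nodes, dim) node features
    # adj: (nodes, nodes) boolean adjacency matrix (should include self-loops)
    d = x.shape[-1]
    q, k, v = x, x, x                                   # learned projections omitted for brevity
    scores = (q @ k.transpose(-1, -2)) / d ** 0.5       # pairwise attention logits
    scores = scores.masked_fill(~adj, float('-inf'))    # only adjacent nodes can attend to each other
    attn = scores.softmax(dim=-1)
    return attn @ v                                      # neighbor messages aggregated by attention weights

# Usage on a 4-node path graph with 8-dim features
x = torch.randn(4, 8)
adj = torch.eye(4, dtype=torch.bool)                     # self-loops
adj[0, 1] = adj[1, 0] = adj[1, 2] = adj[2, 1] = adj[2, 3] = adj[3, 2] = True
out = adjacency_masked_attention(x, adj)
print(out.shape)  # torch.Size([4, 8])
```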
Alternatives and similar repositories for adjacent-attention-network
Users interested in adjacent-attention-network are comparing it to the libraries listed below.
- An implementation of (Induced) Set Attention Block, from the Set Transformer paper ☆60 · Updated 2 years ago
- Implementation of Lie Transformer, Equivariant Self-Attention, in Pytorch ☆94 · Updated 4 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆51 · Updated 3 years ago
- Implementation of Tranception, an attention network, paired with retrieval, that is SOTA for protein fitness prediction ☆32 · Updated 3 years ago
- A simple implementation of a deep linear Pytorch module ☆21 · Updated 4 years ago
- Implementation of the Remixer Block from the Remixer paper, in Pytorch ☆36 · Updated 3 years ago
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆58 · Updated last year
- Implementation of Fast Transformer in Pytorch ☆175 · Updated 4 years ago
- Implementation of E(n)-Transformer, which incorporates attention mechanisms into Welling's E(n)-Equivariant Graph Neural Network ☆226 · Updated last year
- Implementation of Geometric Vector Perceptron, a simple circuit for 3d rotation equivariance for learning over large biomolecules, in Pyt… ☆75 · Updated 4 years ago
- Axial Positional Embedding for Pytorch ☆83 · Updated 6 months ago
- A Pytorch implementation of Attention on Attention module (both self and guided variants), for Visual Question Answering ☆43 · Updated 4 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆101 · Updated 2 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆119 · Updated 4 years ago
- A simple Transformer where the softmax has been replaced with normalization ☆20 · Updated 4 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- A GPT, made only of MLPs, in Jax ☆58 · Updated 4 years ago
- JAX implementation of Learning to learn by gradient descent by gradient descent ☆27 · Updated 3 weeks ago
- Pytorch reimplementation of Molecule Attention Transformer, which uses a transformer to tackle the graph-like structure of molecules ☆58 · Updated 4 years ago
- Implementation of Cross Transformer for spatially-aware few-shot transfer, in Pytorch ☆53 · Updated 4 years ago
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers ☆105 · Updated 4 years ago
- Simple notebooks to learn diffusion models on toy datasets ☆17 · Updated 2 years ago
- Local Attention - Flax module for Jax ☆22 · Updated 4 years ago
- Implementation of Discrete Key / Value Bottleneck, in Pytorch ☆88 · Updated 2 years ago
- Implementation of the Triangle Multiplicative module, used in Alphafold2 as an efficient way to mix rows or columns of a 2d feature map, … ☆35 · Updated 4 years ago
- Implementation of Metaformer, but in an autoregressive manner ☆27 · Updated 3 years ago
- Standalone Product Key Memory module in Pytorch - for augmenting Transformer models ☆82 · Updated last year
- Implementation of Kronecker Attention in Pytorch ☆19 · Updated 4 years ago
- An implementation of the 2021 paper by Geoffrey Hinton, "How to represent part-whole hierarchies in a neural network", in Pytorch ☆57 · Updated 4 years ago
- Pytorch implementation of a simple way to enable (Stochastic) Frame Averaging for any network ☆50 · Updated last year