lucidrains / adjacent-attention-network
Graph neural network message passing reframed as a Transformer with local attention
☆70 · Updated 3 years ago
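The description above frames graph message passing as Transformer attention restricted to each node's immediate neighbourhood. Below is a minimal PyTorch sketch of that idea; the module name `NeighborhoodAttention`, its signature, and the adjacency-mask handling are illustrative assumptions, not the repository's actual API.

```python
import torch
from torch import nn

class NeighborhoodAttention(nn.Module):
    """Multi-head attention where each node attends only to its graph neighbours (illustrative sketch)."""
    def __init__(self, dim, heads=8):
        super().__init__()
        assert dim % heads == 0
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, nodes, adj):
        # nodes: (batch, n, dim) node features; adj: (batch, n, n) boolean adjacency matrix
        b, n, _ = nodes.shape
        h = self.heads
        q, k, v = self.to_qkv(nodes).chunk(3, dim=-1)
        q, k, v = (t.reshape(b, n, h, -1).transpose(1, 2) for t in (q, k, v))

        sim = torch.einsum('b h i d, b h j d -> b h i j', q, k) * self.scale

        # local attention: mask out everything except neighbours (plus self-loops)
        mask = adj | torch.eye(n, dtype=torch.bool, device=adj.device)
        sim = sim.masked_fill(~mask.unsqueeze(1), float('-inf'))

        attn = sim.softmax(dim=-1)
        out = torch.einsum('b h i j, b h j d -> b h i d', attn, v)
        out = out.transpose(1, 2).reshape(b, n, -1)
        return self.to_out(out)

# toy usage: 16 nodes with 64-dim features and a random symmetric adjacency
nodes = torch.randn(1, 16, 64)
adj = torch.rand(1, 16, 16) > 0.8
adj = adj | adj.transpose(1, 2)
out = NeighborhoodAttention(dim=64)(nodes, adj)   # -> (1, 16, 64)
```

Masking the similarity matrix with the adjacency matrix recovers one round of message passing per attention layer; stacking layers lets information propagate beyond immediate neighbours.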
Alternatives and similar repositories for adjacent-attention-network
Users interested in adjacent-attention-network commonly compare it to the libraries listed below.
- Implementation of Lie Transformer, Equivariant Self-Attention, in Pytorch ☆97 · Updated 4 years ago
- An implementation of (Induced) Set Attention Block, from the Set Transformers paper ☆65 · Updated 2 years ago
- A simple implementation of a deep linear Pytorch module ☆21 · Updated 5 years ago
- Implementation of Tranception, an attention network, paired with retrieval, that is SOTA for protein fitness prediction ☆32 · Updated 3 years ago
- Implementation of E(n)-Transformer, which incorporates attention mechanisms into Welling's E(n)-Equivariant Graph Neural Network ☆226 · Updated last year
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆59 · Updated 2 years ago
- Implementation of Geometric Vector Perceptron, a simple circuit for 3d rotation equivariance for learning over large biomolecules, in Pytorch ☆76 · Updated 4 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi…☆51Updated 3 years ago
- Implementation of the Remixer Block from the Remixer paper, in Pytorch☆36Updated 4 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch☆101Updated 2 years ago
- Axial Positional Embedding for Pytorch☆84Updated 10 months ago
- Pytorch reimplementation of Molecule Attention Transformer, which uses a transformer to tackle the graph-like structure of molecules☆58Updated 5 years ago
- A simple Transformer where the softmax has been replaced with normalization☆20Updated 5 years ago
- An implementation of the 2021 paper by Geoffrey Hinton, "How to represent part-whole hierarchies in a neural network", in Pytorch ☆57 · Updated 4 years ago
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers ☆111 · Updated 4 years ago
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆167 · Updated last year
- Implementation of the Triangle Multiplicative module, used in Alphafold2 as an efficient way to mix rows or columns of a 2d feature map, … ☆39 · Updated 4 years ago
- Standalone Product Key Memory module in Pytorch - for augmenting Transformer models ☆87 · Updated last month
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆120 · Updated 4 years ago
- Implementation of Denoising Diffusion for protein design, but using the new Equiformer (successor to SE3 Transformers) with some addition… ☆57 · Updated 3 years ago
- Authors' implementation of LieTransformer: Equivariant Self-Attention for Lie Groups ☆36 · Updated 4 years ago
- Implementation of Token Shift GPT - an autoregressive model that relies solely on shifting the sequence space for mixing ☆50 · Updated 3 years ago
- Implementation of Fast Transformer in Pytorch ☆177 · Updated 4 years ago
- ☆38 · Updated 5 years ago
- ☆39 · Updated 3 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- Implementation of Kronecker Attention in Pytorch ☆19 · Updated 5 years ago
- Local Attention - Flax module for Jax ☆22 · Updated 4 years ago
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in Pytorch ☆39 · Updated 3 years ago
- A Pytorch implementation of the Attention on Attention module (both self and guided variants), for Visual Question Answering ☆43 · Updated 5 years ago