lucidrains / adjacent-attention-network
Graph neural network message passing reframed as a Transformer with local attention
☆69 · Updated 2 years ago
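The one-line description above is the whole idea: ordinary message passing, but realized as self-attention in which each node attends only over its immediate graph neighborhood. Below is a minimal sketch of that idea, using the adjacency matrix as an attention mask; it is an illustration under stated assumptions (single head, projections omitted, hypothetical function name), not the repository's actual API.

```python
import torch

def adjacency_masked_attention(x, adj):
    # x:   (batch, nodes, dim) node features
    # adj: (batch, nodes, nodes) boolean adjacency matrix
    scale = x.shape[-1] ** -0.5
    # similarity between every pair of nodes (query/key projections omitted)
    sim = torch.einsum('b i d, b j d -> b i j', x, x) * scale

    # each node may attend to itself and to its graph neighbors only
    eye = torch.eye(x.shape[1], dtype=torch.bool, device=x.device)
    sim = sim.masked_fill(~(adj | eye), float('-inf'))

    attn = sim.softmax(dim=-1)
    # aggregate neighbor features (value projection omitted)
    return torch.einsum('b i j, b j d -> b i d', attn, x)
```

Because non-neighbors are masked to -inf before the softmax, each node's output is a convex combination of its own and its neighbors' features, which is message passing reframed as local attention.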
Alternatives and similar repositories for adjacent-attention-network
Users interested in adjacent-attention-network are comparing it to the libraries listed below
- Implementation of Lie Transformer, Equivariant Self-Attention, in Pytorch ☆96 · Updated 4 years ago
- An implementation of (Induced) Set Attention Block, from the Set Transformers paper ☆62 · Updated 2 years ago
- Implementation of Tranception, an attention network, paired with retrieval, that is SOTA for protein fitness prediction ☆32 · Updated 3 years ago
- A simple implementation of a deep linear Pytorch module ☆21 · Updated 4 years ago
- Implementation of the Remixer Block from the Remixer paper, in Pytorch ☆36 · Updated 4 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆51 · Updated 3 years ago
- Implementation of E(n)-Transformer, which incorporates attention mechanisms into Welling's E(n)-Equivariant Graph Neural Network ☆227 · Updated last year
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆59 · Updated last year
- Implementation of Geometric Vector Perceptron, a simple circuit for 3d rotation equivariance for learning over large biomolecules, in Pyt… ☆75 · Updated 4 years ago
- Axial Positional Embedding for Pytorch ☆83 · Updated 7 months ago
- A GPT, made only of MLPs, in Jax ☆58 · Updated 4 years ago
- A simple Transformer where the softmax has been replaced with normalization (a minimal sketch of this idea follows after the list) ☆20 · Updated 5 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆101 · Updated 2 years ago
- Pytorch reimplementation of Molecule Attention Transformer, which uses a transformer to tackle the graph-like structure of molecules ☆58 · Updated 4 years ago
- An implementation of the 2021 paper by Geoffrey Hinton, "How to represent part-whole hierarchies in a neural network", in Pytorch ☆57 · Updated 4 years ago
- Implementation of Fast Transformer in Pytorch ☆177 · Updated 4 years ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing (a sketch follows after the list) ☆50 · Updated 3 years ago
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers" ☆105 · Updated 4 years ago
- Code for the paper PermuteFormer ☆42 · Updated 4 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆120 · Updated 4 years ago
- A Pytorch implementation of Attention on Attention module (both self and guided variants), for Visual Question Answering ☆42 · Updated 4 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- ☆39 · Updated 2 years ago
- Authors' implementation of LieTransformer: Equivariant Self-Attention for Lie Groups ☆36 · Updated 4 years ago
- Implementation of Discrete Key / Value Bottleneck, in Pytorch ☆88 · Updated 2 years ago
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ ☆55 · Updated 2 years ago
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆165 · Updated last year
- Pytorch implementation of a simple way to enable (Stochastic) Frame Averaging for any network ☆51 · Updated last year
- A denoising diffusion probabilistic model (DDPM) tailored for conditional generation of protein distograms ☆143 · Updated 3 years ago
- ☆37 · Updated 4 years ago
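For the entry above describing a Transformer whose softmax is replaced with normalization, one plausible minimal reading (an assumption on my part, not necessarily the linked repository's exact scheme) is to rectify the attention scores and divide each query's row by its sum, so the weights remain non-negative and sum to one:

```python
import torch
import torch.nn.functional as F

def normalized_attention(q, k, v, eps=1e-8):
    # q, k, v: (batch, seq, dim)
    sim = torch.einsum('b i d, b j d -> b i j', q, k) * q.shape[-1] ** -0.5
    sim = F.relu(sim)  # keep attention weights non-negative
    # normalize each row so the weights sum to one, as softmax would
    attn = sim / sim.sum(dim=-1, keepdim=True).clamp(min=eps)
    return torch.einsum('b i j, b j d -> b i d', attn, v)
```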
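The Token Shift GPT entry names a mechanism simple enough to sketch directly: half the feature channels of each token are shifted one position forward along the sequence, so every position mixes in part of its predecessor's features without any attention. The function below is a minimal, hypothetical rendering of that shift; the actual repository wraps it in a full autoregressive GPT.

```python
import torch
import torch.nn.functional as F

def token_shift(x):
    # x: (batch, seq, dim)
    x_shift, x_keep = x.chunk(2, dim=-1)
    # causal shift: pad one step at the start of the sequence, trim the last step
    x_shift = F.pad(x_shift, (0, 0, 1, -1))
    return torch.cat((x_shift, x_keep), dim=-1)
```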