lucidrains / adjacent-attention-network
Graph neural network message passing reframed as a Transformer with local attention
☆68 · Updated 2 years ago
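The repository's tagline describes message passing recast as attention restricted to each node's graph neighborhood. As a rough illustration of that core idea (not the repository's actual code), here is a minimal NumPy sketch of single-head self-attention masked to adjacent nodes, with identity q/k/v projections assumed purely for brevity:

```python
import numpy as np

def adjacent_attention(x, adj):
    """Self-attention where each node attends only to its graph
    neighbors (plus itself), via an adjacency mask.
    x: (n, d) node features; adj: (n, n) binary adjacency matrix.
    Identity q/k/v projections are an illustrative simplification.
    """
    n, d = x.shape
    q, k, v = x, x, x
    scores = q @ k.T / np.sqrt(d)                     # (n, n) pairwise scores
    mask = adj.astype(bool) | np.eye(n, dtype=bool)   # neighbors + self-loops
    scores = np.where(mask, scores, -np.inf)          # drop non-neighbors
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v                                # aggregate neighbor values
```

An isolated node (no neighbors) simply attends to itself, so its output equals its input feature vector.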
Alternatives and similar repositories for adjacent-attention-network
Users interested in adjacent-attention-network are comparing it to the repositories listed below:
- An implementation of (Induced) Set Attention Block, from the Set Transformers paper ☆56 · Updated 2 years ago
- Implementation of Lie Transformer, Equivariant Self-Attention, in Pytorch ☆92 · Updated 4 years ago
- Implementation of E(n)-Transformer, which incorporates attention mechanisms into Welling's E(n)-Equivariant Graph Neural Network ☆220 · Updated 11 months ago
- Implementation of Tranception, an attention network, paired with retrieval, that is SOTA for protein fitness prediction ☆32 · Updated 2 years ago
- Implementation of Geometric Vector Perceptron, a simple circuit for 3d rotation equivariance for learning over large biomolecules, in Pytorch ☆74 · Updated 3 years ago
- Code for the paper PermuteFormer ☆42 · Updated 3 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆100 · Updated 2 years ago
- Pytorch reimplementation of Molecule Attention Transformer, which uses a transformer to tackle the graph-like structure of molecules