lucidrains / adjacent-attention-network
Graph neural network message passing reframed as a Transformer with local attention
☆67 · Updated 2 years ago
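The one-line description above captures the repo's core trick: treat a node's graph neighborhood as its attention window, so message passing becomes attention masked by the adjacency matrix. Below is a minimal single-head sketch of that idea in Pytorch; the function name, combined QKV weight layout, and explicit self-loop are illustrative assumptions, not the repo's actual API.

```python
import torch

def adjacent_attention(x, adj, w_qkv):
    """Single-head attention where each node attends only to its graph
    neighbors (plus itself): message passing expressed as masked attention.
    x: (n, d) node features; adj: (n, n) boolean adjacency matrix;
    w_qkv: (d, 3*d) combined query/key/value projection (an assumption,
    not the parameterization of the actual repo).
    """
    n, d = x.shape
    q, k, v = (x @ w_qkv).chunk(3, dim=-1)
    sim = (q @ k.t()) * d ** -0.5                    # (n, n) attention logits
    mask = adj | torch.eye(n, dtype=torch.bool)      # keep self-edges
    sim = sim.masked_fill(~mask, float('-inf'))      # restrict to neighbors
    return sim.softmax(dim=-1) @ v                   # aggregate messages

# toy usage: a 4-node path graph with 8-dim features
x = torch.randn(4, 8)
adj = torch.zeros(4, 4, dtype=torch.bool)
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = True
out = adjacent_attention(x, adj, torch.randn(8, 24))  # -> (4, 8)
```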
Alternatives and similar repositories for adjacent-attention-network:
Users interested in adjacent-attention-network are comparing it to the repositories listed below.
- Implementation of Lie Transformer, Equivariant Self-Attention, in Pytorch ☆88 · Updated 3 years ago
- An implementation of (Induced) Set Attention Block, from the Set Transformers paper ☆56 · Updated 2 years ago
- Implementation of Geometric Vector Perceptron, a simple circuit for 3d rotation equivariance for learning over large biomolecules, in Pytorch ☆72 · Updated 3 years ago
- Implementation of Tranception, an attention network, paired with retrieval, that is SOTA for protein fitness prediction ☆31 · Updated 2 years ago
- Official Pytorch Implementation of GraphiT ☆107 · Updated 3 years ago
- Code for the paper PermuteFormer ☆42 · Updated 3 years ago
- Implementation of E(n)-Transformer, which incorporates attention mechanisms into Welling's E(n)-Equivariant Graph Neural Network ☆216 · Updated 7 months ago
- A simple implementation of a deep linear Pytorch module ☆19 · Updated 4 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi…☆50Updated 2 years ago
- ☆34 · Updated 4 years ago
- ☆36 · Updated 4 years ago
- Implementation of the Remixer Block from the Remixer paper, in Pytorch ☆35 · Updated 3 years ago
- Implementation of Denoising Diffusion for protein design, but using the new Equiformer (successor to SE3 Transformers) with some additio… ☆56 · Updated 2 years ago
- Pytorch reimplementation of Molecule Attention Transformer, which uses a transformer to tackle the graph-like structure of molecules ☆58 · Updated 4 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆97 · Updated last year
- Transformers are Graph Neural Networks! ☆51 · Updated 4 years ago
- Low Rank Global Attention for Graph Neural Networks ☆12 · Updated 4 years ago
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆47 · Updated 2 years ago
- Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021) ☆48 · Updated last year
- [ICML 2021] GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training (official implementation) ☆103 · Updated 2 years ago
- [CIKM-21] Pytorch implementation of LiteGT: Efficient and Lightweight Graph Transformers ☆11 · Updated 3 years ago
- A GPT, made only of MLPs, in Jax ☆57 · Updated 3 years ago
- ☆36 · Updated 2 years ago
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆57 · Updated last year
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 (a minimal sketch of ReLA follows after this list) ☆49 · Updated 2 years ago
- A Pytorch implementation of Adafactor (https://arxiv.org/pdf/1804.04235.pdf) ☆23 · Updated 5 years ago
- Pytorch implementation of a simple way to enable (Stochastic) Frame Averaging for any network ☆48 · Updated 5 months ago
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in Pytorch ☆37 · Updated 2 years ago
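For the ReLA entry above ("Sparse Attention with Linear Units", https://arxiv.org/abs/2104.07012), the core change — replacing the softmax with ReLU — is simple enough to sketch directly. The rescaling below is a plain RMS-style normalization without a learned gain, a simplification of the paper's RMSNorm; treat the whole function as an assumption-laden sketch, not the listed repo's exact code.

```python
import torch
import torch.nn.functional as F

def rectified_linear_attention(q, k, v, eps=1e-6):
    """Attention with the softmax replaced by ReLU (ReLA). The weights are
    nonnegative and naturally sparse, but no longer sum to 1, so the output
    is rescaled afterward. q, k, v: (seq, d)."""
    d = q.shape[-1]
    weights = F.relu(q @ k.t() * d ** -0.5)   # sparse, unnormalized weights
    out = weights @ v
    # RMS rescaling in place of the paper's learned RMSNorm (assumption)
    return out / (out.pow(2).mean(dim=-1, keepdim=True).sqrt() + eps)

# toy usage
q = k = v = torch.randn(5, 16)
print(rectified_linear_attention(q, k, v).shape)  # torch.Size([5, 16])
```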