lucidrains / lie-transformer-pytorch
Implementation of Lie Transformer, Equivariant Self-Attention, in Pytorch
☆94 · Updated 4 years ago
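Below is a minimal usage sketch of the library, following the style of lucidrains' other PyTorch repositories; the import path, constructor arguments, and the lifted output shape are assumptions and should be checked against the repository's README rather than taken as the definitive API.

```python
# Minimal usage sketch (assumed API, not verified against the current release).
import torch
from lie_transformer_pytorch import LieTransformer  # assumed import path

# dim / depth / liftsamples are assumed constructor arguments
model = LieTransformer(
    dim = 512,          # feature dimension per point
    depth = 2,          # number of equivariant self-attention blocks
    liftsamples = 4     # group-element samples used to lift each point
)

feats = torch.randn(1, 64, 512)   # per-point features
coors = torch.randn(1, 64, 3)     # 3D coordinates
mask  = torch.ones(1, 64).bool()  # padding mask

out = model(feats, coors, mask = mask)
# the output lives on the lifted points, so the sequence length is
# expected to grow by the number of lift samples (here 64 * 4 = 256)
print(out.shape)
```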
Alternatives and similar repositories for lie-transformer-pytorch
Users interested in lie-transformer-pytorch are comparing it to the libraries listed below.
- Implementation of Geometric Vector Perceptron, a simple circuit for 3d rotation equivariance for learning over large biomolecules, in Pyt… ☆74 · Updated 4 years ago
- ☆70 · Updated 2 years ago
- Graph neural network message passing reframed as a Transformer with local attention ☆69 · Updated 2 years ago
- Equivariant Transformer (ET) layers are image-to-image mappings that incorporate prior knowledge on invariances with respect to continuo… ☆92 · Updated 6 years ago
- Implementation of E(n)-Transformer, which incorporates attention mechanisms into Welling's E(n)-Equivariant Graph Neural Network ☆225 · Updated last year
- Authors' implementation of LieTransformer: Equivariant Self-Attention for Lie Groups ☆36 · Updated 4 years ago
- An implementation of (Induced) Set Attention Block, from the Set Transformers paper ☆60 · Updated 2 years ago
- LieTransformer: Equivariant Self-Attention for Lie Groups ☆72 · Updated last year
- This repository implements and evaluates convolutional networks on the Möbius strip as toy model instantiations of Coordinate Independent… ☆72 · Updated last year
- To be a next-generation DL-based phenotype prediction from genome mutations. ☆19 · Updated 4 years ago
- Energy-based models for atomic-resolution protein conformations ☆99 · Updated 3 years ago
- Implementation of Tranception, an attention network, paired with retrieval, that is SOTA for protein fitness prediction ☆32 · Updated 3 years ago
- Code repository for the paper "Group Equivariant Stand-Alone Self Attention For Vision" published at ICLR 2021. https://openreview.net/fo… ☆29 · Updated 4 years ago
- Lie Algebra Convolutional Network implementation ☆43 · Updated 3 years ago
- ☆38 · Updated 2 years ago
- Sample pytorch implementation of Covariant Compositional Networks ☆13 · Updated 7 years ago
- Code for "Bridging the Gap between f-GANs and Wasserstein GANs", ICML 2020 ☆14 · Updated 5 years ago
- A simple implementation of a deep linear Pytorch module ☆21 · Updated 4 years ago
- Implementation of Kronecker Attention in Pytorch ☆19 · Updated 4 years ago
- Reproduces experiments from "Grounding inductive biases in natural images: invariance stems from variations in data" ☆17 · Updated 10 months ago
- pytorch implementation of trDesign ☆45 · Updated 4 years ago
- Pytorch reimplementation of Molecule Attention Transformer, which uses a transformer to tackle the graph-like structure of molecules ☆59 · Updated 4 years ago
- A simple Transformer where the softmax has been replaced with normalization ☆20 · Updated 4 years ago
- Official implementation of the paper "Topographic VAEs learn Equivariant Capsules" ☆80 · Updated 3 years ago
- Implementation of Invariant Point Attention, used for coordinate refinement in the structure module of Alphafold2, as a standalone Pytorc… ☆162 · Updated 2 years ago
- ☆41 · Updated 2 years ago
- Reference implementation for "Group Equivariant Subsampling" ☆16 · Updated 3 years ago
- ☆50 · Updated 4 years ago
- [ICML 2020] Differentiating through the Fréchet Mean (https://arxiv.org/abs/2003.00335). ☆56 · Updated 3 years ago
- A pytorch implementation for the LSTM experiments in the paper: Why Gradient Clipping Accelerates Training: A Theoretical Justification f… ☆46 · Updated 5 years ago