lucidrains / lie-transformer-pytorch
Implementation of Lie Transformer, Equivariant Self-Attention, in Pytorch
☆95 · Updated 4 years ago
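For orientation, below is a minimal, hypothetical usage sketch of this repository's equivariant self-attention module in Pytorch. The constructor arguments and forward signature shown are assumptions made for illustration, not the repository's documented API; consult the README of lucidrains/lie-transformer-pytorch for the actual interface.

```python
import torch
from lie_transformer_pytorch import LieTransformer  # assumed import path

# Assumed constructor arguments; the real module may expose different options
model = LieTransformer(
    dim = 512,   # per-point feature dimension
    depth = 2,   # number of equivariant self-attention layers
    heads = 8    # attention heads per layer
)

feats = torch.randn(1, 64, 512)   # per-point features
coors = torch.randn(1, 64, 3)     # 3D coordinates the attention is equivariant over
mask  = torch.ones(1, 64).bool()  # padding mask

out = model(feats, coors, mask = mask)  # assumed forward signature
```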
Alternatives and similar repositories for lie-transformer-pytorch
Users interested in lie-transformer-pytorch are comparing it to the libraries listed below.
- Implementation of Geometric Vector Perceptron, a simple circuit for 3d rotation equivariance for learning over large biomolecules, in Pyt… ☆75 · Updated 4 years ago
- Graph neural network message passing reframed as a Transformer with local attention ☆69 · Updated 2 years ago
- Implementation of E(n)-Transformer, which incorporates attention mechanisms into Welling's E(n)-Equivariant Graph Neural Network ☆226 · Updated last year
- LieTransformer: Equivariant Self-Attention for Lie Groups ☆73 · Updated last year
- ☆70 · Updated 2 years ago
- Equivariant Transformer (ET) layers are image-to-image mappings that incorporate prior knowledge on invariances with respect to continuo… ☆92 · Updated 6 years ago
- An implementation of (Induced) Set Attention Block, from the Set Transformers paper ☆61 · Updated 2 years ago
- Authors' implementation of LieTransformer: Equivariant Self-Attention for Lie Groups ☆36 · Updated 4 years ago
- Deterministic Decoding for Discrete Data in Variational Autoencoders ☆24 · Updated 5 years ago
- Lie Algebra Convolutional Network implementation ☆43 · Updated 3 years ago
- Sample pytorch implementation of Covariant Compositional Networks ☆13 · Updated 7 years ago
- Energy-based models for atomic-resolution protein conformations ☆99 · Updated 3 years ago
- Implementation of Tranception, an attention network, paired with retrieval, that is SOTA for protein fitness prediction ☆32 · Updated 3 years ago
- This repository implements and evaluates convolutional networks on the Möbius strip as toy model instantiations of Coordinate Independent… ☆72 · Updated last year
- ☆38 · Updated 2 years ago
- Implementation of Kronecker Attention in Pytorch ☆19 · Updated 5 years ago
- Pytorch reimplementation of Molecule Attention Transformer, which uses a transformer to tackle the graph-like structure of molecules ☆58 · Updated 4 years ago
- ICML 2020 Paper: Latent Variable Modelling with Hyperbolic Normalizing Flows ☆54 · Updated 2 years ago
- [ICML 2020] Differentiating through the Fréchet Mean (https://arxiv.org/abs/2003.00335) ☆58 · Updated 3 years ago
- Code repository for the paper "Group Equivariant Stand-Alone Self Attention For Vision" published at ICLR 2021. https://openreview.net/fo… ☆29 · Updated 4 years ago
- A TensorFlow implementation of the paper 'Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks' ☆31 · Updated last year
- A Prior of a Googol Gaussians: a Tensor Ring Induced Prior for Generative Models ☆28 · Updated last year
- Implementation of Invariant Point Attention, used for coordinate refinement in the structure module of Alphafold2, as a standalone Pytorc… ☆165 · Updated 2 years ago
- JAX implementation of Graph Attention Networks ☆13 · Updated 3 years ago
- Implementation of the Triangle Multiplicative module, used in Alphafold2 as an efficient way to mix rows or columns of a 2d feature map, … ☆36 · Updated 4 years ago
- [ICLR 2020] FSPool: Learning Set Representations with Featurewise Sort Pooling ☆42 · Updated last year
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers ☆105 · Updated 4 years ago
- Official repository for our ICLR 2021 paper Evaluating the Disentanglement of Deep Generative Models with Manifold Topology ☆36 · Updated 4 years ago
- ☆30 · Updated 4 years ago
- To be a next-generation DL-based phenotype prediction from genome mutations ☆19 · Updated 4 years ago