gdewael / bio-attention
Simple implementations of attention modules adapted for the biological data domain.
☆13 · Updated 3 months ago
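The repository above provides attention modules for biological data. As a generic illustration of the core operation such modules build on (this is not the bio-attention API; the function name and shapes are illustrative assumptions), a minimal scaled dot-product attention step can be sketched in NumPy:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Generic scaled dot-product attention sketch.

    q, k, v: arrays of shape (..., seq_len, dim).
    Returns the attended values and the attention weights.
    Illustrative only; not taken from the bio-attention library.
    """
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)   # (..., L_q, L_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ v, weights

# Toy batch: 2 sequences, 5 tokens, embedding dim 8
rng = np.random.default_rng(0)
q = rng.normal(size=(2, 5, 8))
k = rng.normal(size=(2, 5, 8))
v = rng.normal(size=(2, 5, 8))
out, w = scaled_dot_product_attention(q, k, v)
```

Each attention weight row sums to 1, and the output keeps the shape of `v`; domain-specific variants (such as those in the listed repositories) typically change the scoring, masking, or positional components around this core.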
Alternatives and similar repositories for bio-attention
Users interested in bio-attention are comparing it to the repositories listed below.
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆163 · Updated last year
- ☆14 · Updated 3 years ago
- Usable implementation of Mogrifier, a circuit for enhancing LSTMs and potentially other networks, from DeepMind ☆19 · Updated last year
- PyTorch reimplementation of Molecule Attention Transformer, which uses a transformer to tackle the graph-like structure of molecules ☆58 · Updated 4 years ago
- Implementation of the algorithm detailed in the paper "Evolutionary design of molecules based on deep learning and a genetic algorithm" ☆23 · Updated last year
- A simple implementation of a deep linear PyTorch module ☆21 · Updated 4 years ago
- Implementation of Denoising Diffusion for protein design, but using the new Equiformer (successor to SE3 Transformers) with some addition… ☆57 · Updated 2 years ago
- Implementation of the Triangle Multiplicative module, used in AlphaFold2 as an efficient way to mix rows or columns of a 2d feature map, … ☆35 · Updated 4 years ago
- Implementation of Tranception, an attention network paired with retrieval that is SOTA for protein fitness prediction ☆32 · Updated 3 years ago
- Implementation of Insertion-deletion Denoising Diffusion Probabilistic Models ☆30 · Updated 3 years ago
- This repository contains code for reproducing results in our paper Interpreting Potts and Transformer Protein Models Through the Lens of … ☆58 · Updated 2 years ago
- Unofficial PyTorch implementation of Fastformer, based on the paper "Fastformer: Additive Attention Can Be All You Need" ☆133 · Updated 3 years ago
- TensorFlow implementation of a linear attention architecture ☆44 · Updated 4 years ago
- A PyTorch implementation of Bayesian flow networks (Graves et al., 2023) ☆27 · Updated last year
- Implementation of the paper "Parameterized Hypercomplex Graph Neural Networks for Graph Classification" by Tuan Le, Marco Bertolini, Fra… ☆32 · Updated 4 years ago
- Standalone Product Key Memory module in PyTorch, for augmenting Transformer models ☆82 · Updated last year
- Unofficial PyTorch implementation of Google's FNet: Mixing Tokens with Fourier Transforms. With checkpoints. ☆76 · Updated 2 years ago
- Course project for CS224W at Stanford ☆22 · Updated 3 years ago
- Axial Positional Embedding for PyTorch ☆83 · Updated 6 months ago
- ☆18 · Updated 3 years ago
- Time-Aware Transformer-based Network for Clinical Notes Series Prediction ☆23 · Updated last year
- Relative Positional Encoding for Transformers with Linear Complexity ☆64 · Updated 3 years ago
- Implementation of Nyström self-attention, from the paper "Nyströmformer" ☆139 · Updated 5 months ago
- Graph neural networks ☆19 · Updated 5 years ago
- PyTorch implementation of FNet: Mixing Tokens with Fourier Transforms ☆27 · Updated 4 years ago
- Aims to be a next-generation DL-based phenotype predictor from genome mutations ☆19 · Updated 4 years ago
- Implements MLP-Mixer (https://arxiv.org/abs/2105.01601) on the CIFAR-10 dataset ☆57 · Updated 3 years ago
- A TensorFlow implementation of the paper "Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks" ☆31 · Updated last year
- Official PyTorch repository for Quaternion Generative Adversarial Networks ☆20 · Updated 3 years ago
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆58 · Updated last year