gdewael / bio-attention
Simple implementations of attention modules adapted for the biological data domain.
☆13 · Updated 4 months ago
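As a generic illustration of the kind of module the repositories below implement (this is not bio-attention's actual API; the function name, shapes, and toy data are assumptions), here is a minimal scaled dot-product attention sketch in NumPy:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v, mask=None):
    """Generic scaled dot-product attention (a sketch, not bio-attention's API).

    q, k, v: arrays of shape (seq_len, d).
    mask: optional boolean (seq_len, seq_len) array, True where attention is allowed.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)              # (seq_len, seq_len) similarities
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # block disallowed positions
    # numerically stable softmax over each row
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                         # weighted sum of value vectors

# Toy example: 4 "residues" with 8-dimensional embeddings (self-attention)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```

The libraries listed below mostly vary this basic recipe: changing the positional encoding, replacing the softmax, or approximating the quadratic score matrix for long biological sequences.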
Alternatives and similar repositories for bio-attention
Users interested in bio-attention are comparing it to the libraries listed below.
- Axial Positional Embedding for Pytorch ☆83 · Updated 7 months ago
- Implementation of Tranception, an attention network, paired with retrieval, that is SOTA for protein fitness prediction ☆32 · Updated 3 years ago
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆165 · Updated last year
- rho_VAE: an autoregressive parametrization of the VAE encoder ☆16 · Updated 6 years ago
- Pytorch reimplementation of Molecule Attention Transformer, which uses a transformer to tackle the graph-like structure of molecules ☆58 · Updated 4 years ago
- Implementation of Denoising Diffusion for protein design, but using the new Equiformer (successor to SE3 Transformers) with some addition… ☆57 · Updated 2 years ago
- Usable implementation of Mogrifier, a circuit for enhancing LSTMs and potentially other networks, from Deepmind ☆20 · Updated last year
- Replication attempt for the Protein Folding Model described in https://www.biorxiv.org/content/10.1101/2021.08.02.454840v1 ☆37 · Updated 3 years ago
- BioMedBERT: A Pre-trained Biomedical Language Model for QA and IR ☆31 · Updated 4 years ago
- Implementation of the Triangle Multiplicative module, used in Alphafold2 as an efficient way to mix rows or columns of a 2d feature map, … ☆36 · Updated 4 years ago
- ☆14 · Updated 3 years ago
- Aims to be a next-generation DL-based phenotype prediction tool for genome mutations ☆19 · Updated 4 years ago
- Decorators for maximizing memory utilization with PyTorch & CUDA ☆17 · Updated last week
- Unofficial PyTorch implementation of Fastformer, based on the paper "Fastformer: Additive Attention Can Be All You Need" ☆133 · Updated 4 years ago
- Relative Positional Encoding for Transformers with Linear Complexity ☆65 · Updated 3 years ago
- Coherent Deconfounding Autoencoder (CODE-AE) can extract both common biological signals shared by incoherent samples and private represen… ☆18 · Updated last year
- A simple Transformer where the softmax has been replaced with normalization ☆20 · Updated 5 years ago
- PyTorch implementation of FNet: Mixing Tokens with Fourier transforms ☆28 · Updated 4 years ago
- Tensorflow implementation of a linear attention architecture ☆44 · Updated 4 years ago
- Implementation of Nyström Self-attention, from the paper Nyströmformer ☆141 · Updated 6 months ago
- Implementation of the algorithm detailed in the paper "Evolutionary design of molecules based on deep learning and a genetic algorithm" ☆24 · Updated last year
- This repository contains code for reproducing results in our paper Interpreting Potts and Transformer Protein Models Through the Lens of … ☆58 · Updated 3 years ago
- Contrastive neighbor embeddings ☆55 · Updated last month
- My own attempt at a long context genomics model, leveraging recent advances in long context attention modeling (Flash Attention + other h… ☆54 · Updated 2 years ago
- Implements MLP-Mixer (https://arxiv.org/abs/2105.01601) with the CIFAR-10 dataset ☆57 · Updated 3 years ago
- Implementation of Insertion-deletion Denoising Diffusion Probabilistic Models ☆30 · Updated 3 years ago
- ResiDual: Transformer with Dual Residual Connections, https://arxiv.org/abs/2304.14802 ☆95 · Updated 2 years ago
- Unofficial PyTorch implementation of Google's FNet: Mixing Tokens with Fourier Transforms, with checkpoints ☆77 · Updated 3 years ago
- Standalone Product Key Memory module in Pytorch, for augmenting Transformer models ☆83 · Updated last year
- Implementation of Geometric Vector Perceptron, a simple circuit for 3d rotation equivariance for learning over large biomolecules, in Pyt… ☆75 · Updated 4 years ago