deep-spin / mcan-vqa-continuous-attention
☆23 · Updated 5 years ago
Alternatives and similar repositories for mcan-vqa-continuous-attention
Users interested in mcan-vqa-continuous-attention are comparing it to the repositories listed below.
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention — ☆49 · Updated 5 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch — ☆120 · Updated 4 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" — ☆73 · Updated 3 years ago
- Official code repository of the paper Learning Associative Inference Using Fast Weight Memory by Schlag et al. — ☆28 · Updated 4 years ago
- (untitled repository) — ☆53 · Updated 4 years ago
- Code for the EMNLP 2021 Oral paper "Are Gender-Neutral Queries Really Gender-Neutral? Mitigating Gender Bias in Image Search" https://arx… — ☆12 · Updated 2 years ago
- PyTorch code for "Perceiver-VL: Efficient Vision-and-Language Modeling with Iterative Latent Attention" (WACV 2023) — ☆33 · Updated 2 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch — ☆126 · Updated 5 years ago
- Implementation of Multistream Transformers in Pytorch