jaketae / g-mlp
PyTorch implementation of "Pay Attention to MLPs"
☆39 · Updated 3 years ago
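For context on what this repository implements: gMLP replaces Transformer self-attention with an all-MLP block whose Spatial Gating Unit mixes information across token positions via a learned linear projection over the sequence axis. The following is a minimal sketch of one such block, assuming the standard formulation from the paper (Liu et al., 2021); the class names `SpatialGatingUnit` and `gMLPBlock` and the near-identity initialization are illustrative, not this repository's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialGatingUnit(nn.Module):
    """Splits channels in half and gates one half with a learned
    linear projection applied across the sequence (token) axis."""
    def __init__(self, d_ffn: int, seq_len: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_ffn // 2)
        # A kernel-size-1 conv over the sequence axis acts as a
        # per-feature linear mixing of token positions.
        self.spatial_proj = nn.Conv1d(seq_len, seq_len, kernel_size=1)
        # Near-identity init (weights ~ 0, bias = 1), so the gate
        # starts out as roughly a pass-through, per the paper.
        nn.init.zeros_(self.spatial_proj.weight)
        nn.init.ones_(self.spatial_proj.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u, v = x.chunk(2, dim=-1)            # each: (batch, seq_len, d_ffn // 2)
        v = self.spatial_proj(self.norm(v))  # mix across token positions
        return u * v                         # elementwise gating

class gMLPBlock(nn.Module):
    """One gMLP block: channel projection -> GELU -> spatial gating -> channel projection."""
    def __init__(self, d_model: int, d_ffn: int, seq_len: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.proj_in = nn.Linear(d_model, d_ffn)
        self.sgu = SpatialGatingUnit(d_ffn, seq_len)
        self.proj_out = nn.Linear(d_ffn // 2, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = F.gelu(self.proj_in(self.norm(x)))
        x = self.sgu(x)
        return self.proj_out(x) + residual

# Example: a batch of 2 sequences, 128 tokens, model width 256.
block = gMLPBlock(d_model=256, d_ffn=512, seq_len=128)
out = block(torch.randn(2, 128, 256))  # -> torch.Size([2, 128, 256])
```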
Related projects
Alternatives and complementary repositories for g-mlp
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity) ☆71 · Updated 4 years ago
- For the paper "Gaussian Transformer: A Lightweight Approach for Natural Language Inference" ☆27 · Updated 4 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch ☆70 · Updated 4 years ago
- Code for Explicit Sparse Transformer ☆57 · Updated last year
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆71 · Updated last year
- Code for "Understanding and Improving Layer Normalization" ☆46 · Updated 4 years ago
- Sparse Attention with Linear Units ☆17 · Updated 3 years ago
- Unofficial PyTorch implementation of the paper "cosFormer: Rethinking Softmax in Attention" ☆43 · Updated 3 years ago
- Code for the paper PermuteFormer ☆42 · Updated 3 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 · Updated 4 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆45 · Updated 4 years ago
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models ☆21 · Updated 2 years ago
- Mask Attention Networks: Rethinking and Strengthen Transformer (NAACL 2021) ☆15 · Updated 3 years ago
- Custom PyTorch implementation of MoCo v3 ☆44 · Updated 3 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆52 · Updated 4 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in PyTorch ☆106 · Updated 3 years ago
- PyTorch implementation of Performer from the paper "Rethinking Attention with Performers" ☆23 · Updated 4 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in PyTorch ☆55 · Updated 3 years ago
- A PyTorch realization of Adafactor (https://arxiv.org/pdf/1804.04235.pdf) ☆24 · Updated 5 years ago
- Implementation of Mogrifier LSTM in PyTorch ☆35 · Updated 4 years ago
- Variational Transformers for Diverse Response Generation ☆82 · Updated 3 months ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch ☆116 · Updated 3 years ago
- Implementation for our WACV 2021 paper "Multi-Loss Weighting with Coefficient of Variations" ☆50 · Updated 3 years ago
- A PyTorch implementation of self-attention with relative position representations ☆51 · Updated 3 years ago
- FlatNCE: A Novel Contrastive Representation Learning Objective ☆86 · Updated 3 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆59 · Updated 2 years ago
- Code for the Findings of EMNLP 2021 paper "EfficientBERT: Progressively Searching Multilayer Perceptron …" ☆32 · Updated last year
- Mixture of Attention Heads ☆38 · Updated 2 years ago