jaketae / g-mlp
PyTorch implementation of Pay Attention to MLPs
☆40 · Updated 3 years ago
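For context, here is a minimal sketch of one gMLP block from "Pay Attention to MLPs" (Liu et al., 2021), the paper this repo implements. It assumes a fixed sequence length, and the module and variable names are illustrative rather than taken from the repository itself:

```python
# Minimal gMLP block sketch (not the repo's actual code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialGatingUnit(nn.Module):
    def __init__(self, d_ffn, seq_len):
        super().__init__()
        self.norm = nn.LayerNorm(d_ffn // 2)
        # Token-mixing projection applied across the sequence dimension.
        self.spatial_proj = nn.Linear(seq_len, seq_len)
        # The paper initializes weights near zero and the bias to one,
        # so the gate starts close to identity and training stays stable.
        nn.init.zeros_(self.spatial_proj.weight)
        nn.init.ones_(self.spatial_proj.bias)

    def forward(self, x):
        u, v = x.chunk(2, dim=-1)                            # split channels
        v = self.norm(v)
        v = self.spatial_proj(v.transpose(1, 2)).transpose(1, 2)
        return u * v                                         # element-wise gate

class gMLPBlock(nn.Module):
    def __init__(self, d_model, d_ffn, seq_len):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.proj_in = nn.Linear(d_model, d_ffn)
        self.sgu = SpatialGatingUnit(d_ffn, seq_len)
        self.proj_out = nn.Linear(d_ffn // 2, d_model)

    def forward(self, x):
        residual = x
        x = F.gelu(self.proj_in(self.norm(x)))
        x = self.sgu(x)
        return self.proj_out(x) + residual

x = torch.randn(2, 16, 64)                  # (batch, seq_len, d_model)
print(gMLPBlock(64, 256, 16)(x).shape)      # torch.Size([2, 16, 64])
```

The spatial projection is the only place tokens interact, which is how gMLP replaces self-attention with plain matrix multiplications over the sequence axis.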
Alternatives and similar repositories for g-mlp:
Users interested in g-mlp are comparing it to the repositories listed below.
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity) ☆73 · Updated 4 years ago
- Code for "Understanding and Improving Layer Normalization" ☆46 · Updated 5 years ago
- Code for Explicit Sparse Transformer ☆60 · Updated last year
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch ☆70 · Updated 4 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- Sparse Attention with Linear Units ☆17 · Updated 3 years ago
- For the paper "Gaussian Transformer: A Lightweight Approach for Natural Language Inference" ☆28 · Updated 5 years ago
- Unofficial PyTorch implementation of the paper "cosFormer: Rethinking Softmax In Attention" ☆44 · Updated 3 years ago
- Implementation of RealFormer using PyTorch ☆101 · Updated 4 years ago
- Custom PyTorch implementation of MoCo v3 ☆45 · Updated 3 years ago
- Recent Advances in MLP-based Models (MLP is all you need!) ☆114 · Updated 2 years ago
- Code for the Findings of EMNLP 2021 paper "EfficientBERT: Progressively Searching Multilayer Perceptron …" ☆32 · Updated last year
- Implementation for our WACV 2021 paper "Multi-Loss Weighting with Coefficient of Variations" ☆50 · Updated 4 years ago
- Implementation of Mogrifier LSTM in PyTorch ☆35 · Updated 4 years ago
- ☆33 · Updated 3 years ago
- Mask Attention Networks: Rethinking and Strengthen Transformer (NAACL 2021) ☆14 · Updated 3 years ago
- Implementation of Memformer, a memory-augmented Transformer, in PyTorch ☆112 · Updated 4 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆48 · Updated 4 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch ☆118 · Updated 3 years ago
- Code for the paper PermuteFormer ☆42 · Updated 3 years ago
- PyTorch implementation of Performer from the paper "Rethinking Attention with Performers" ☆25 · Updated 4 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in PyTorch ☆57 · Updated 3 years ago
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models ☆21 · Updated 2 years ago
- ☆36 · Updated 4 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 · Updated 4 years ago
- Relative Positional Encoding for Transformers with Linear Complexity ☆62 · Updated 2 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 5 years ago
- Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention ☆20 · Updated 4 years ago (a minimal sketch of the linear-attention idea follows this list)
- Implementations of Recent Papers in Computer Vision ☆39 · Updated 2 years ago
- ☆22 · Updated 3 years ago
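Several of the repositories above (Linformer, Performer, cosFormer, "Transformers are RNNs") revolve around replacing softmax attention with a form whose cost is linear in sequence length. A minimal sketch of that idea, using the elu(x) + 1 feature map from "Transformers are RNNs" in the non-causal case; the function name and shapes here are illustrative, not drawn from any of these repos:

```python
# Linear-attention sketch: phi(Q) @ (phi(K)^T V) instead of softmax(QK^T)V.
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, seq_len, dim)
    q = F.elu(q) + 1                               # positive feature map phi(q)
    k = F.elu(k) + 1                               # phi(k)
    # Contract phi(K)^T with V first: O(N * dim^2), linear in N,
    # instead of materializing the O(N^2) attention matrix.
    kv = torch.einsum("bnd,bne->bde", k, v)        # (batch, dim, dim)
    # Normalizer: phi(q_i) dotted with the sum of all phi(k_j).
    z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + eps)
    return torch.einsum("bnd,bde,bn->bne", q, kv, z)

q = k = v = torch.randn(2, 128, 64)
print(linear_attention(q, k, v).shape)             # torch.Size([2, 128, 64])
```

Reassociating the matrix products so that phi(K)^T V is computed before multiplying by phi(Q) is the whole trick: the quadratic attention matrix never appears, only a dim-by-dim summary of the keys and values.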