jaketae / g-mlp
PyTorch implementation of "Pay Attention to MLPs"
☆40 · Updated 3 years ago
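For orientation, here is a minimal sketch of the gMLP block this repo implements, built around its Spatial Gating Unit. The module names, dimensions, and usage below are illustrative, following the paper's description rather than the repository's actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialGatingUnit(nn.Module):
    """Gates half the channels with a learned projection across tokens."""
    def __init__(self, dim_ffn: int, seq_len: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim_ffn // 2)
        # Token-mixing projection along the sequence dimension.
        self.spatial_proj = nn.Linear(seq_len, seq_len)
        # The paper initializes weights near zero and biases to one,
        # so the gate starts close to identity.
        nn.init.zeros_(self.spatial_proj.weight)
        nn.init.ones_(self.spatial_proj.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u, v = x.chunk(2, dim=-1)  # split channels in half
        v = self.norm(v)
        v = self.spatial_proj(v.transpose(1, 2)).transpose(1, 2)
        return u * v

class gMLPBlock(nn.Module):
    def __init__(self, dim: int, dim_ffn: int, seq_len: int):
        super().__init__()  # dim_ffn must be even for the channel split
        self.norm = nn.LayerNorm(dim)
        self.proj_in = nn.Linear(dim, dim_ffn)
        self.sgu = SpatialGatingUnit(dim_ffn, seq_len)
        self.proj_out = nn.Linear(dim_ffn // 2, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = F.gelu(self.proj_in(self.norm(x)))
        x = self.sgu(x)
        return self.proj_out(x) + residual

block = gMLPBlock(dim=128, dim_ffn=512, seq_len=64)
out = block(torch.randn(2, 64, 128))  # -> shape (2, 64, 128)
```

Initializing the spatial projection near zero with unit bias makes each block start close to an identity mapping, which the paper reports helps training stability.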
Alternatives and similar repositories for g-mlp
Users interested in g-mlp are comparing it to the repositories listed below.
- Reproducing the Linear Multihead Attention introduced in the Linformer paper ("Linformer: Self-Attention with Linear Complexity") ☆76 · Updated 4 years ago
- Code for Explicit Sparse Transformer ☆62 · Updated last year
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- Code for "Understanding and Improving Layer Normalization" ☆46 · Updated 5 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch ☆119 · Updated 3 years ago
- Implementing "SYNTHESIZER: Rethinking Self-Attention in Transformer Models" using PyTorch ☆70 · Updated 5 years ago
- Sparse Attention with Linear Units ☆17 · Updated 4 years ago
- Custom PyTorch implementation of MoCo v3 ☆45 · Updated 4 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆49 · Updated 4 years ago
- For the paper "Gaussian Transformer: A Lightweight Approach for Natural Language Inference" ☆28 · Updated 5 years ago
- ☆33 · Updated 4 years ago
- Unofficial PyTorch implementation of the paper "cosFormer: Rethinking Softmax in Attention" ☆44 · Updated 3 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in PyTorch ☆58 · Updated 4 years ago
- Recent Advances in MLP-based Models (MLP is all you need!) ☆115 · Updated 2 years ago
- Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models (ACL-IJCNLP 2021) ☆21 · Updated 2 years ago
- Implementation of RealFormer using PyTorch ☆100 · Updated 4 years ago
- Implementation of Online Label Smoothing in PyTorch ☆94 · Updated 2 years ago
- PyTorch implementation of Performer from the paper "Rethinking Attention with Performers" ☆25 · Updated 4 years ago
- My implementation of the gMLP model from the paper "Pay Attention to MLPs" ☆25 · Updated 4 years ago
- FlatNCE: A Novel Contrastive Representation Learning Objective ☆90 · Updated 3 years ago
- Mask Attention Networks: Rethinking and Strengthen Transformer (NAACL 2021) ☆14 · Updated 4 years ago
- ☆36 · Updated 4 years ago
- Axial Positional Embedding for PyTorch ☆81 · Updated 3 months ago
- Code for the Findings of EMNLP 2021 paper "EfficientBERT: Progressively Searching Multilayer Perceptron …" ☆32 · Updated last year
- Official implementation of Auxiliary Learning by Implicit Differentiation (ICLR 2021) ☆84 · Updated 10 months ago
- Implementation of Cross Transformer for spatially-aware few-shot transfer, in PyTorch ☆53 · Updated 4 years ago
- Official PyTorch implementation of Long-Short Transformer (NeurIPS 2021) ☆225 · Updated 3 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆61 · Updated 3 years ago
- Implementation of Fast Transformer in PyTorch ☆174 · Updated 3 years ago
- Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia by Summarizing Long Sequences" ☆70 · Updated 2 years ago