rishikksh20 / rectified-linear-attention
Sparse Attention with Linear Units
☆17 · Updated 4 years ago
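For context, "Sparse Attention with Linear Units" introduces ReLA (rectified linear attention), which swaps the softmax in scaled dot-product attention for a ReLU, so many attention weights become exactly zero. Below is a minimal PyTorch sketch of that core substitution; the function name and tensor shapes are illustrative, and the paper's additional RMSNorm-style re-normalization for training stability is omitted.

```python
import torch
import torch.nn.functional as F

def rela_attention(q, k, v):
    # q, k, v: (batch, heads, seq_len, head_dim). Illustrative sketch:
    # standard scaled dot-product scores, but activated with ReLU
    # instead of softmax, yielding sparse, unnormalized weights.
    # (The ReLA paper also re-normalizes outputs; omitted here.)
    scale = q.size(-1) ** -0.5
    scores = torch.matmul(q, k.transpose(-2, -1)) * scale
    weights = F.relu(scores)
    return torch.matmul(weights, v)
```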
Alternatives and similar repositories for rectified-linear-attention
Users interested in rectified-linear-attention are comparing it to the repositories listed below
- PyTorch implementation of Pay Attention to MLPs ☆40 · Updated 3 years ago
- Locally Enhanced Self-Attention: Rethinking Self-Attention as Local and Context Terms ☆20 · Updated 3 years ago
- Unofficial PyTorch implementation of the paper "cosFormer: Rethinking Softmax In Attention" ☆44 · Updated 3 years ago
- For the paper "Gaussian Transformer: A Lightweight Approach for Natural Language Inference" ☆28 · Updated 5 years ago
- WeightNet: Revisiting the Design Space of Weight Networks ☆19 · Updated 4 years ago
- LV-BERT: Exploiting Layer Variety for BERT (Findings of ACL 2021) ☆18 · Updated 2 years ago
- ☆33 · Updated 4 years ago
- Official implementation for the paper "Relational Surrogate Loss Learning", ICLR 2022 ☆37 · Updated 2 years ago
- Implementation of Mogrifier LSTM in PyTorch ☆35 · Updated 5 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 · Updated 4 years ago
- Custom PyTorch implementation of MoCo v3 ☆45 · Updated 4 years ago
- [ICCV 2021] Official implementation of "Scalable Vision Transformers with Hierarchical Pooling" ☆33 · Updated 3 years ago
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models ☆21 · Updated 2 years ago
- A PyTorch implementation of Adafactor (https://arxiv.org/pdf/1804.04235.pdf) ☆23 · Updated 5 years ago
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity) ☆76 · Updated 4 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch ☆70 · Updated 4 years ago
- PyTorch implementation of Performer from the paper "Rethinking Attention with Performers" ☆25 · Updated 4 years ago
- This repository contains the code for the paper in Findings of EMNLP 2021: "EfficientBERT: Progressively Searching Multilayer Perceptron … ☆32 · Updated last year
- CoaT: Co-Scale Conv-Attentional Image Transformers ☆16 · Updated 4 years ago
- Lightweight Transformer for Multi-modal Tasks ☆16 · Updated 2 years ago
- Code for our NeurIPS 2020 paper "Auto Learning Attention", coming soon ☆22 · Updated 4 years ago
- ☆32 · Updated 2 years ago
- A Transformer-based single-model, multi-scale VAE ☆55 · Updated 3 years ago
- Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention (see the sketch after this list) ☆22 · Updated 4 years ago
- Code for Explicit Sparse Transformer ☆62 · Updated last year
- Mixture of Attention Heads ☆44 · Updated 2 years ago
- Code for "Understanding and Improving Layer Normalization" ☆46 · Updated 5 years ago
- Multi-modal data augmentation for machine learning ☆16 · Updated 5 years ago
- ☆16 · Updated 3 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in PyTorch ☆57 · Updated 4 years ago
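The "Transformers are RNNs" entry above refers to kernelized linear attention, which replaces softmax(QKᵀ)V with φ(Q)(φ(K)ᵀV) up to normalization, reducing the cost from quadratic to linear in sequence length. A minimal non-causal PyTorch sketch, assuming the paper's elu(x) + 1 feature map (function name and shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, heads, seq_len, dim). Feature map from the paper:
    # phi(x) = elu(x) + 1, which keeps features positive.
    q, k = F.elu(q) + 1.0, F.elu(k) + 1.0
    # Factorized attention: accumulate sum_j phi(k_j) v_j^T once (O(N)),
    # instead of materializing the full N x N score matrix.
    kv = torch.einsum('bhnd,bhne->bhde', k, v)
    # Per-query normalizer: phi(q_i)^T sum_j phi(k_j).
    z = 1.0 / (torch.einsum('bhnd,bhd->bhn', q, k.sum(dim=2)) + eps)
    return torch.einsum('bhnd,bhde,bhn->bhne', q, kv, z)
```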