rishikksh20 / rectified-linear-attention
Sparse Attention with Linear Units
☆17 · Updated 3 years ago
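The name refers to rectified linear attention (ReLA): scaled dot-product attention with the softmax replaced by a ReLU, so attention weights are non-negative and exactly zero wherever scores are negative, i.e. naturally sparse. Below is a minimal PyTorch sketch of that core idea only; it is an illustration, not this repo's code, and it omits the output normalization (e.g. RMSNorm and gating) that the paper uses for stability.

```python
import torch

def relu_attention(q, k, v):
    # Core idea of ReLA: use ReLU in place of softmax, giving
    # sparse, unnormalized, non-negative attention weights.
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # (..., L_q, L_k)
    weights = torch.relu(scores)                  # zeros out negative scores
    return weights @ v

# toy usage: (batch, seq_len, dim)
q, k, v = (torch.randn(2, 8, 64) for _ in range(3))
out = relu_attention(q, k, v)   # shape (2, 8, 64)
```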
Alternatives and similar repositories for rectified-linear-attention:
Users interested in rectified-linear-attention are comparing it to the repositories listed below.
- PyTorch implementation of Pay Attention to MLPs ☆40 · Updated 3 years ago
- CoaT: Co-Scale Conv-Attentional Image Transformers ☆16 · Updated 3 years ago
- Locally Enhanced Self-Attention: Rethinking Self-Attention as Local and Context Terms ☆20 · Updated 3 years ago
- Implementation of Mogrifier LSTM in PyTorch ☆35 · Updated 4 years ago
- Implementation of SYNTHESIZER: Rethinking Self-Attention in Transformer Models in PyTorch ☆70 · Updated 4 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 · Updated 4 years ago
- Unofficial PyTorch implementation of the paper "cosFormer: Rethinking Softmax In Attention" ☆44 · Updated 3 years ago
- PyTorch implementation of Performer from the paper "Rethinking Attention with Performers" ☆24 · Updated 4 years ago
- WeightNet: Revisiting the Design Space of Weight Networks ☆19 · Updated 4 years ago
- Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models (ACL-IJCNLP 2021) ☆21 · Updated 2 years ago
- Code for the Findings of EMNLP 2021 paper "EfficientBERT: Progressively Searching Multilayer Perceptron …" ☆32 · Updated last year
- Code for Explicit Sparse Transformer ☆60 · Updated last year
- Official implementation of "Relational Surrogate Loss Learning" (ICLR 2022) ☆36 · Updated 2 years ago
- For the paper "Gaussian Transformer: A Lightweight Approach for Natural Language Inference" ☆28 · Updated 4 years ago
- LV-BERT: Exploiting Layer Variety for BERT (Findings of ACL 2021) ☆18 · Updated last year
- A small framework that mimics PyTorch using CuPy or NumPy ☆27 · Updated 3 years ago
- Custom PyTorch implementation of MoCo v3 ☆45 · Updated 3 years ago
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity) ☆73 · Updated 4 years ago
- Warmup learning-rate wrapper for PyTorch schedulers (see the sketch after this list) ☆40 · Updated 4 years ago
- Mask Attention Networks: Rethinking and Strengthen Transformer (NAACL 2021) ☆14 · Updated 3 years ago
- Code for our NeurIPS 2020 paper "Auto Learning Attention", coming soon ☆21 · Updated 3 years ago
- Mixture of Attention Heads ☆41 · Updated 2 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in PyTorch ☆57 · Updated 3 years ago
- A PyTorch implementation of Adafactor (https://arxiv.org/pdf/1804.04235.pdf) ☆23 · Updated 5 years ago
- Multi-modal data augmentation for machine learning ☆16 · Updated 5 years ago
- Code for "Understanding and Improving Layer Normalization" ☆46 · Updated 5 years ago
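The warmup learning-rate wrapper listed above lends itself to a short sketch. Here is a minimal version assuming linear warmup; the class name `LinearWarmup` and its API are hypothetical illustrations, not the listed repo's interface.

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

class LinearWarmup:
    # Hypothetical wrapper (name and API assumed): ramp each param
    # group's LR linearly for `warmup_steps` steps, then hand
    # control to the wrapped scheduler.
    def __init__(self, optimizer, scheduler, warmup_steps):
        self.optimizer = optimizer
        self.scheduler = scheduler
        self.warmup_steps = warmup_steps
        self.step_num = 0
        self.base_lrs = [g["lr"] for g in optimizer.param_groups]

    def step(self):
        self.step_num += 1
        if self.step_num <= self.warmup_steps:
            scale = self.step_num / self.warmup_steps
            for group, base_lr in zip(self.optimizer.param_groups, self.base_lrs):
                group["lr"] = base_lr * scale
        else:
            self.scheduler.step()  # e.g. cosine decay after warmup

# toy usage
model = torch.nn.Linear(8, 8)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sched = LinearWarmup(opt, CosineAnnealingLR(opt, T_max=100), warmup_steps=10)
for _ in range(20):
    opt.step()      # optimizer step first, then scheduler step
    sched.step()
```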