OpenNLPLab / cosFormer
[ICLR 2022] Official implementation of cosformer-attention in cosFormer: Rethinking Softmax in Attention
☆196 · Updated 2 years ago
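For context, cosFormer replaces the softmax in attention with a ReLU feature map plus a cosine re-weighting of query–key similarities; applying cos(a − b) = cos(a)cos(b) + sin(a)sin(b) splits the re-weighted similarity into two separable terms, so the output can be computed as Q(KᵀV) rather than (QKᵀ)V, in time linear in sequence length. Below is a minimal, unofficial PyTorch sketch of the non-causal variant, assuming a (batch, heads, seq_len, head_dim) layout; the function name `cosformer_attention` and the `eps` stabilizer are illustrative choices, not the repository's API.

```python
import math

import torch
import torch.nn.functional as F


def cosformer_attention(q, k, v, eps=1e-6):
    """Non-causal cosFormer-style attention (unofficial sketch).

    q, k, v: (batch, heads, seq_len, head_dim)
    Cost is O(n * d^2) instead of the O(n^2 * d) of softmax attention.
    """
    n = q.shape[2]
    # Position-dependent angles pi/2 * i / n for the cosine re-weighting.
    idx = torch.arange(n, device=q.device, dtype=q.dtype)
    angle = (math.pi / 2) * idx / n
    cos_w = torch.cos(angle)[None, None, :, None]
    sin_w = torch.sin(angle)[None, None, :, None]

    # Non-negative feature map in place of softmax.
    q, k = F.relu(q), F.relu(k)

    # cos(a - b) = cos(a)cos(b) + sin(a)sin(b) splits the similarity
    # into two separable terms, so K^T V can be aggregated first.
    q_cos, q_sin = q * cos_w, q * sin_w
    k_cos, k_sin = k * cos_w, k * sin_w

    kv_cos = torch.einsum('bhnd,bhne->bhde', k_cos, v)
    kv_sin = torch.einsum('bhnd,bhne->bhde', k_sin, v)
    num = (torch.einsum('bhnd,bhde->bhne', q_cos, kv_cos)
           + torch.einsum('bhnd,bhde->bhne', q_sin, kv_sin))

    # Row-wise normalizer: sum over keys of the implicit attention weights.
    z = (torch.einsum('bhnd,bhd->bhn', q_cos, k_cos.sum(dim=2))
         + torch.einsum('bhnd,bhd->bhn', q_sin, k_sin.sum(dim=2)))
    return num / (z.unsqueeze(-1) + eps)
```

The cosine factor is largest when query and key positions are close, so the re-weighting encodes a locality bias, which is the property of softmax attention the paper argues is worth preserving, while the separable decomposition keeps the cost linear in sequence length.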
Alternatives and similar repositories for cosFormer
Users interested in cosFormer are comparing it to the libraries listed below
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆228 · Updated 3 years ago
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆369 · Updated 2 years ago
- [ICML 2021 Oral] We show pure attention suffers rank collapse, and how different mechanisms combat it. ☆166 · Updated 4 years ago
- Implementation of Linformer for PyTorch ☆300 · Updated last year
- Code release for "LogME: Practical Assessment of Pre-trained Models for Transfer Learning" (ICML 2021) and Ranking and Tuning Pre-trained… ☆210 · Updated 2 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 · Updated 5 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch ☆70 · Updated 5 years ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆268 · Updated 4 years ago
- AdaShare: Learning What To Share For Efficient Deep Multi-Task Learning ☆114 · Updated 4 years ago
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity) ☆75 · Updated 5 years ago
- Implementation of the paper "Self-Attention with Relative Position Representations" ☆139 · Updated 4 years ago
- Unofficial PyTorch implementation of the paper "cosFormer: Rethinking Softmax In Attention". ☆44 · Updated 3 years ago
- Code for Explicit Sparse Transformer ☆61 · Updated 2 years ago
- An implementation of the efficient attention module. ☆321 · Updated 4 years ago
- An implementation of local windowed attention for language modeling ☆483 · Updated 3 months ago
- Recent Advances in MLP-based Models (MLP is all you need!) ☆117 · Updated 2 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch ☆120 · Updated 4 years ago
- A PyTorch & Keras implementation and demo of Fastformer. ☆190 · Updated 3 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆87 · Updated 2 years ago
- Fully featured implementation of Routing Transformer ☆296 · Updated 3 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in PyTorch ☆123 · Updated 4 years ago
- Sequence modeling with Mega. ☆300 · Updated 2 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408 ☆197 · Updated 2 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Updated 4 years ago
- FlatNCE: A Novel Contrastive Representation Learning Objective ☆90 · Updated 3 years ago
- Code repository of the paper "Modelling Long Range Dependencies in ND: From Task-Specific to a General Purpose CNN" https://arxiv.org/abs… ☆183 · Updated 5 months ago
- The accompanying code for "Memory-efficient Transformers via Top-k Attention" (Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonatha… ☆69 · Updated 4 years ago
- The pure and clear PyTorch Distributed Training Framework. ☆274 · Updated last year
- Apollo: An Adaptive Parameter-wise Diagonal Quasi-Newton Method for Nonconvex Stochastic Optimization ☆182 · Updated 3 years ago
- Loss and accuracy go opposite ways...right? ☆95 · Updated 5 years ago