OpenNLPLab / cosFormer
[ICLR 2022] Official implementation of cosformer-attention in cosFormer: Rethinking Softmax in Attention
☆193 · Updated 2 years ago
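For reference, the core idea of cosFormer is a non-negative (ReLU) feature map combined with a cosine-based positional re-weighting, which decomposes into separable per-position terms so attention can be computed in linear time. The sketch below is a minimal, unofficial PyTorch illustration under simplifying assumptions (single head, no padding or causal masking, re-weighting horizon tied to the sequence length); it is not the repository's implementation.

```python
import math
import torch
import torch.nn.functional as F

def cosformer_attention(q, k, v, eps=1e-6):
    """Linear-time cosFormer-style attention sketch.

    q, k, v: (batch, seq_len, dim). Returns (batch, seq_len, dim).
    Assumptions: single head, no masking, horizon M = seq_len.
    """
    b, n, d = q.shape
    m = n  # re-weighting horizon M (assumed equal to sequence length here)

    # Non-negative feature map, as in linear-attention variants.
    q, k = F.relu(q), F.relu(k)

    # cos(pi/2 * (i - j) / M) re-weighting, decomposed via
    # cos(a - b) = cos(a)cos(b) + sin(a)sin(b) into per-position factors.
    idx = torch.arange(1, n + 1, device=q.device, dtype=q.dtype)
    angle = idx * math.pi / (2 * m)
    cos_w = torch.cos(angle)[None, :, None]  # (1, n, 1)
    sin_w = torch.sin(angle)[None, :, None]

    q_cos, q_sin = q * cos_w, q * sin_w
    k_cos, k_sin = k * cos_w, k * sin_w

    # Associate (K^T V) first: O(n * d^2) instead of O(n^2 * d).
    kv_cos = torch.einsum('bnd,bne->bde', k_cos, v)
    kv_sin = torch.einsum('bnd,bne->bde', k_sin, v)
    num = (torch.einsum('bnd,bde->bne', q_cos, kv_cos)
           + torch.einsum('bnd,bde->bne', q_sin, kv_sin))

    # Row-wise normalization by the summed key features.
    z = (torch.einsum('bnd,bd->bn', q_cos, k_cos.sum(dim=1))
         + torch.einsum('bnd,bd->bn', q_sin, k_sin.sum(dim=1)))
    return num / (z.unsqueeze(-1) + eps)

# Example usage (shapes only):
# q = k = v = torch.randn(2, 128, 64)
# out = cosformer_attention(q, k, v)  # -> (2, 128, 64)
```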
Alternatives and similar repositories for cosFormer
Users interested in cosFormer are comparing it to the libraries listed below.
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆363 · Updated last year
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆225 · Updated 3 years ago
- [ICML 2021 Oral] We show pure attention suffers rank collapse, and how different mechanisms combat it. ☆164 · Updated 4 years ago
- Implementation of Linformer for Pytorch ☆286 · Updated last year
- Code release for "LogME: Practical Assessment of Pre-trained Models for Transfer Learning" (ICML 2021) and Ranking and Tuning Pre-trained… ☆209 · Updated last year
- An implementation of local windowed attention for language modeling ☆450 · Updated 4 months ago
- Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space” ☆249 · Updated last year
- Recent Advances in MLP-based Models (MLP is all you need!) ☆115 · Updated 2 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408 ☆195 · Updated 2 years ago
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆768 · Updated last year
- ☆191 · Updated last year
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆214 · Updated 2 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆119 · Updated 3 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆117 · Updated 4 years ago
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity) ☆76 · Updated 4 years ago
- Code for Explicit Sparse Transformer ☆62 · Updated last year
- This repository is an implementation of the loss function proposed in https://arxiv.org/pdf/2110.06848.pdf. ☆115 · Updated 3 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆85 · Updated 2 years ago
- Transformers w/o Attention, based fully on MLPs ☆93 · Updated last year
- [EMNLP 2022] Official implementation of Transnormer from the paper "The Devil in Linear Transformer" ☆60 · Updated last year
- ☆640 · Updated 2 weeks ago
- Implementation of Uniformer, a simple attention and 3d convolutional net that achieved SOTA in a number of video classification tasks, de… ☆101 · Updated 3 years ago
- Fully featured implementation of Routing Transformer ☆292 · Updated 3 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Updated 3 years ago
- ☆246 · Updated 3 years ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆123 · Updated last year
- [ICLR 2021 top 3%] Is Attention Better Than Matrix Decomposition? ☆332 · Updated 2 years ago
- Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention ☆22 · Updated 4 years ago
- An implementation of the efficient attention module. ☆315 · Updated 4 years ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆264 · Updated 3 years ago