OpenNLPLab / cosFormer
[ICLR 2022] Official implementation of cosFormer attention from the paper "cosFormer: Rethinking Softmax in Attention"
☆194 · Updated 2 years ago
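For reference, the attention this repository implements replaces softmax with a ReLU feature map plus a cos-based positional re-weighting; since cos(a - b) = cos(a)cos(b) + sin(a)sin(b), the re-weighted score splits into two terms that each admit linear-time attention. Below is a minimal, non-causal PyTorch sketch of that decomposition; the function name, tensor layout, and `eps` stabilizer are illustrative assumptions, not the repository's actual API.

```python
# Sketch of cosFormer-style linear attention (non-causal case),
# written from the paper's description; not the official code.
import math
import torch
import torch.nn.functional as F

def cosformer_attention(q, k, v, eps=1e-6):
    """q, k, v: (batch, seq_len, dim). Returns (batch, seq_len, dim)."""
    b, m, d = q.shape
    # ReLU feature map keeps similarity scores non-negative
    # (this replaces the softmax kernel).
    q, k = F.relu(q), F.relu(k)
    # cos(pi/2 * (i - j) / M) re-weighting, factored via
    # cos(a - b) = cos(a)cos(b) + sin(a)sin(b).
    idx = torch.arange(1, m + 1, device=q.device, dtype=q.dtype).view(1, m, 1)
    weight = math.pi / 2 * idx / m
    q_cos, q_sin = q * torch.cos(weight), q * torch.sin(weight)
    k_cos, k_sin = k * torch.cos(weight), k * torch.sin(weight)
    # Contract keys with values first: O(m * d^2) instead of O(m^2 * d).
    kv_cos = torch.einsum('bmd,bme->bde', k_cos, v)
    kv_sin = torch.einsum('bmd,bme->bde', k_sin, v)
    num = (torch.einsum('bmd,bde->bme', q_cos, kv_cos)
           + torch.einsum('bmd,bde->bme', q_sin, kv_sin))
    # Normalizer: the same contraction against an all-ones "value".
    den = (q_cos @ k_cos.sum(dim=1, keepdim=True).transpose(1, 2)
           + q_sin @ k_sin.sum(dim=1, keepdim=True).transpose(1, 2))
    return num / (den + eps)  # eps is an assumed numerical stabilizer
```

Because the cos factor splits into per-position terms, the quadratic attention matrix is never materialized, which is the point of the re-weighting trick.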
Alternatives and similar repositories for cosFormer
Users interested in cosFormer are comparing it to the libraries listed below.
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆366 · Updated last year
- Official PyTorch implementation of Long-Short Transformer (NeurIPS 2021) ☆225 · Updated 3 years ago
- [ICML 2021 Oral] We show pure attention suffers rank collapse, and how different mechanisms combat it ☆165 · Updated 4 years ago
- Implementation of Linformer for PyTorch ☆290 · Updated last year
- Code release for "LogME: Practical Assessment of Pre-trained Models for Transfer Learning" (ICML 2021) and Ranking and Tuning Pre-trained… ☆208 · Updated last year
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity) ☆76 · Updated 5 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models in PyTorch ☆70 · Updated 5 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch ☆119 · Updated 3 years ago
- Recent Advances in MLP-based Models (MLP is all you need!) ☆115 · Updated 2 years ago
- AdaShare: Learning What To Share For Efficient Deep Multi-Task Learning ☆112 · Updated 4 years ago
- An implementation of local windowed attention for language modeling ☆460 · Updated 6 months ago
- Sinkhorn Transformer - practical implementation of Sparse Sinkhorn Attention ☆265 · Updated 3 years ago
- Implementation of the paper "Self-Attention with Relative Position Representations" ☆135 · Updated 4 years ago
- [EMNLP 2022] Official implementation of TransNormer from the paper "The Devil in Linear Transformer" ☆61 · Updated last year
- A PyTorch & Keras implementation and demo of Fastformer ☆189 · Updated 2 years ago
- Sequence modeling with Mega ☆296 · Updated 2 years ago
- A pure and clear PyTorch distributed training framework ☆276 · Updated last year
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms ☆259 · Updated 4 years ago
- An implementation of the efficient attention module ☆319 · Updated 4 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆60 · Updated 4 years ago
- Code for Explicit Sparse Transformer ☆62 · Updated last year
- Simple tutorials on PyTorch DDP training ☆281 · Updated 2 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Updated 4 years ago
- Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space” ☆250 · Updated last year
- Root Mean Square Layer Normalization ☆245 · Updated 2 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408 ☆196 · Updated 2 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆86 · Updated 2 years ago
- FlatNCE: A Novel Contrastive Representation Learning Objective ☆90 · Updated 3 years ago
- PyTorch repository for the ICLR 2022 paper (GSAM), which improves generalization (e.g. +3.8% top-1 accuracy on ImageNet with ViT-B/32) ☆143 · Updated 2 years ago
- Loss and accuracy go opposite ways...right? ☆94 · Updated 5 years ago