OpenNLPLab / cosFormer
[ICLR 2022] Official implementation of cosFormer attention from the paper "cosFormer: Rethinking Softmax in Attention"
☆180 · Updated last year
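For orientation, here is a minimal sketch of the non-causal cosFormer attention the paper describes: softmax is replaced by a ReLU feature map plus a cos-based positional re-weighting, decomposed so the whole computation stays linear in sequence length. The function name and tensor shapes below are illustrative assumptions, not this repo's API.

```python
# Minimal sketch of non-causal cosFormer attention (assumed shapes, not the repo's API).
import torch
import torch.nn.functional as F

def cosformer_attention(q, k, v, eps=1e-6):
    """q, k, v: (batch, seq_len, dim) -> (batch, seq_len, dim)."""
    n = q.shape[1]
    # Non-negative feature map in place of softmax.
    q, k = F.relu(q), F.relu(k)
    # The cos re-weighting cos(pi/2 * (i - j) / M), with M = seq_len here, is split via
    # cos(a - b) = cos(a)cos(b) + sin(a)sin(b) so linear complexity is preserved.
    theta = (torch.pi / 2) * torch.arange(n, dtype=q.dtype, device=q.device) / n
    cos, sin = torch.cos(theta)[None, :, None], torch.sin(theta)[None, :, None]
    q_cos, q_sin = q * cos, q * sin
    k_cos, k_sin = k * cos, k * sin
    # Compute (K'^T V) first: O(n * d^2) rather than the O(n^2 * d) of full attention.
    num = torch.einsum('bnd,bde->bne', q_cos, torch.einsum('bnd,bne->bde', k_cos, v)) \
        + torch.einsum('bnd,bde->bne', q_sin, torch.einsum('bnd,bne->bde', k_sin, v))
    # Normalizer Q'(K'^T 1), also computed in linear time.
    den = torch.einsum('bnd,bd->bn', q_cos, k_cos.sum(dim=1)) \
        + torch.einsum('bnd,bd->bn', q_sin, k_sin.sum(dim=1))
    return num / den.clamp(min=eps).unsqueeze(-1)
```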
Related projects
Alternatives and complementary repositories for cosFormer
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆350 · Updated last year
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆222 · Updated 2 years ago
- Implementation of Linformer for Pytorch (see the low-rank projection sketch after this list) ☆257 · Updated 10 months ago
- Code release for "Flowformer: Linearizing Transformers with Conservation Flows" (ICML 2022), https://arxiv.org/pdf/2202.06258.pdf ☆305 · Updated 4 months ago
- [EMNLP 2022] Official implementation of Transnormer from the paper "The Devil in Linear Transformer" ☆55 · Updated last year
- An implementation of local windowed attention for language modeling ☆386 · Updated 2 months ago
- An implementation of the efficient attention module. ☆285 · Updated 3 years ago
- Recent Advances in MLP-based Models (MLP is all you need!) ☆112 · Updated last year
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity) ☆73 · Updated 4 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 · Updated 4 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆116 · Updated 3 years ago
- Code release for "LogME: Practical Assessment of Pre-trained Models for Transfer Learning" (ICML 2021) and Ranking and Tuning Pre-trained… ☆203 · Updated last year
- Implementation of the paper "Self-Attention with Relative Position Representations" ☆124 · Updated 3 years ago
- Implementation of "SYNTHESIZER: Rethinking Self-Attention in Transformer Models" in Pytorch ☆70 · Updated 4 years ago
- Official code for our CVPR'22 paper "Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space" ☆246 · Updated last year
- Unofficial PyTorch implementation of the paper "cosFormer: Rethinking Softmax In Attention". ☆43 · Updated 3 years ago
- [ICLR 2021 top 3%] Is Attention Better Than Matrix Decomposition? ☆325 · Updated 2 years ago
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" ☆281 · Updated 2 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆79 · Updated last year
- [ICML 2021 Oral] We show pure attention suffers from rank collapse, and how different mechanisms combat it. ☆162 · Updated 3 years ago
- Code for Explicit Sparse Transformer ☆56 · Updated last year
- Fully featured implementation of Routing Transformer ☆284 · Updated 3 years ago
- [ICLR 2023] Official implementation of the Toeplitz Neural Network (TNN) from the paper "Toeplitz Neural Network for Sequence Modeling" ☆74 · Updated 7 months ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆254 · Updated 3 years ago
- [NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" ☆66 · Updated 2 years ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆115 · Updated 8 months ago
- Code repository of the paper "Modelling Long Range Dependencies in ND: From Task-Specific to a General Purpose CNN" https://arxiv.org/abs… ☆181 · Updated last year
- PyTorch codes for "LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning" ☆232 · Updated last year
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆106 · Updated 4 years ago
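Several entries above (Linformer and the Linear Multihead Attention reproduction) share one idea: project keys and values from sequence length n down to a fixed rank before the softmax, so the attention map is n×rank rather than n×n. As referenced at the Linformer item, a minimal sketch of that projection follows; the class name, parameter names, and default rank are illustrative assumptions, not taken from any listed repo.

```python
# Minimal Linformer-style low-rank attention sketch (illustrative names and defaults).
import torch
import torch.nn as nn

class LowRankSelfAttention(nn.Module):
    def __init__(self, dim, seq_len, rank=256):
        super().__init__()
        self.scale = dim ** -0.5
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        # Learned projections over the length axis: (seq_len -> rank).
        self.proj_k = nn.Parameter(torch.randn(seq_len, rank) / seq_len ** 0.5)
        self.proj_v = nn.Parameter(torch.randn(seq_len, rank) / seq_len ** 0.5)

    def forward(self, x):                                  # x: (batch, seq_len, dim)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        k = torch.einsum('bnd,nr->brd', k, self.proj_k)    # (batch, rank, dim)
        v = torch.einsum('bnd,nr->brd', v, self.proj_v)    # (batch, rank, dim)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)  # (batch, seq_len, rank)
        return attn @ v                                    # (batch, seq_len, dim)

# Usage: LowRankSelfAttention(dim=512, seq_len=1024)(torch.randn(2, 1024, 512))
```

Because the rank stays fixed as the sequence grows, attention cost scales as O(n · rank) instead of O(n²), at the price of tying the projection matrices to a maximum sequence length.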