kuixu / Linear-Multihead-Attention
Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity)
☆76 · Updated 4 years ago
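For orientation, here is a minimal sketch of the idea the repo reproduces, not its actual API (class and parameter names such as `LinearMHA`, `E`, and `F` are illustrative). Linformer projects keys and values along the sequence axis from length n down to a fixed k, so the attention map is n × k rather than n × n and the cost is linear in sequence length:

```python
import torch
import torch.nn as nn

class LinearMHA(nn.Module):
    """Linformer-style multihead attention sketch: O(n*k) instead of O(n^2)."""
    def __init__(self, d_model, num_heads, seq_len, k=256):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = d_model // num_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        # The Linformer trick: learned low-rank projections that compress
        # the sequence axis of keys and values from seq_len down to k.
        self.E = nn.Parameter(torch.randn(k, seq_len) / seq_len ** 0.5)
        self.F = nn.Parameter(torch.randn(k, seq_len) / seq_len ** 0.5)

    def forward(self, x):
        B, N, D = x.shape  # N must equal the seq_len given at init
        q = self.q_proj(x).view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        # Compress the sequence dimension: (k, N) x (B, N, D) -> (B, k, D)
        keys = torch.einsum('kn,bnd->bkd', self.E, self.k_proj(x))
        vals = torch.einsum('kn,bnd->bkd', self.F, self.v_proj(x))
        keys = keys.view(B, -1, self.num_heads, self.head_dim).transpose(1, 2)
        vals = vals.view(B, -1, self.num_heads, self.head_dim).transpose(1, 2)
        # Attention map is (N x k), not (N x N): linear in sequence length.
        attn = (q @ keys.transpose(-2, -1)) / self.head_dim ** 0.5
        out = (attn.softmax(dim=-1) @ vals).transpose(1, 2).reshape(B, N, D)
        return self.out_proj(out)

x = torch.randn(2, 512, 768)                    # (batch, seq_len, d_model)
y = LinearMHA(768, 12, seq_len=512, k=256)(x)   # -> (2, 512, 768)
```

With k fixed and much smaller than n, time and memory scale as O(n·k); the trade-off is that the projections E and F tie the module to a fixed maximum sequence length.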
Alternatives and similar repositories for Linear-Multihead-Attention:
Users interested in Linear-Multihead-Attention are comparing it to the repositories listed below
- PyTorch implementation of Pay Attention to MLPs ☆40 · Updated 3 years ago
- [ICLR 2022] Official implementation of cosformer-attention in cosFormer: Rethinking Softmax in Attention ☆190 · Updated 2 years ago
- Warmup learning rate wrapper for Pytorch Scheduler ☆41 · Updated 5 years ago
- Code for Explicit Sparse Transformer ☆60 · Updated last year
- Recent Advances in MLP-based Models (MLP is all you need!) ☆115 · Updated 2 years ago
- Official PyTorch implementation of "Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity" (ICLR'21 Oral) ☆103 · Updated 3 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using Pytorch ☆70 · Updated 4 years ago
- Attention mechanism ☆53 · Updated 3 years ago
- Custom pytorch implementation of MoCo v3 ☆45 · Updated 4 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in Pytorch ☆57 · Updated 4 years ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021) ☆225 · Updated 3 years ago
- Locally Enhanced Self-Attention: Rethinking Self-Attention as Local and Context Terms ☆20 · Updated 3 years ago
- Pytorch implementation of CVPR 2021 paper: SuperMix: Supervising the Mixing Data Augmentation ☆92 · Updated 3 years ago
- Implementation of Online Label Smoothing in PyTorch ☆94 · Updated 2 years ago
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" ☆287 · Updated 3 years ago
- An open-source project for long-tail classification ☆39 · Updated 3 years ago
- Pytorch implementation of the hamburger module from the ICLR 2021 paper "Is Attention Better Than Matrix Decomposition?" ☆98 · Updated 4 years ago
- Official Pytorch implementation of the MixMo framework ☆83 · Updated 3 years ago
- A Pytorch implementation of Global Self-Attention Network, a fully-attention backbone for vision tasks ☆94 · Updated 4 years ago
- Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention ☆22 · Updated 4 years ago
- Implementations of Recent Papers in Computer Vision ☆38 · Updated 2 years ago
- [CVPR 2021] Code release for "Unsupervised Feature Learning by Cross-Level Instance-Group Discrimination" ☆101 · Updated 2 years ago
- Pytorch implementation for the ICLR 2021 paper MiCE: Mixture of Contrastive Experts for Unsupervised Image Clustering ☆49 · Updated 4 years ago
- [AAAI 2021] Curriculum Labeling: Revisiting Pseudo-Labeling for Semi-Supervised Learning ☆139 · Updated 4 years ago
- Transformers w/o Attention, based fully on MLPs ☆93 · Updated last year
- CLIP: Connecting Text and Image (Learning Transferable Visual Models From Natural Language Supervision) ☆79 · Updated 4 years ago
- An implementation of the efficient attention module ☆306 · Updated 4 years ago
- WeightNet: Revisiting the Design Space of Weight Networks ☆19 · Updated 4 years ago
- ☆198 · Updated 9 months ago
- Implementation of Cross Transformer for spatially-aware few-shot transfer, in Pytorch ☆52 · Updated 4 years ago