cmsflash / efficient-attention
An implementation of the efficient attention module.
☆306 · Updated 4 years ago
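For context, the repository implements efficient attention, which avoids the quadratic cost of standard dot-product attention by normalizing queries and keys separately and aggregating values through a small context matrix. A minimal NumPy sketch of that idea (function and variable names here are illustrative, not the repository's actual API):

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along a chosen axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def efficient_attention(q, k, v):
    """Linear-complexity attention sketch.

    q, k: (n, d_k); v: (n, d_v), where n is the number of positions.
    Instead of forming the (n, n) attention map, keys are softmaxed over
    positions and queries over channels, so values are pooled through a
    (d_k, d_v) context matrix at O(n * d_k * d_v) cost.
    """
    q = softmax(q, axis=-1)        # each query normalized over channels
    k = softmax(k, axis=0)         # each key channel normalized over positions
    context = k.T @ v              # (d_k, d_v) global context summary
    return q @ context             # (n, d_v); the (n, n) map is never built
```

For long sequences or high-resolution feature maps, memory scales with `n * d` rather than `n * n`, which is the main point of the module.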
Alternatives and similar repositories for efficient-attention:
Users interested in efficient-attention are comparing it to the repositories listed below.
- [NeurIPS 2021] [T-PAMI] Global Filter Networks for Image Classification · ☆474 · Updated last year
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" · ☆287 · Updated 2 years ago
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in Pytorc… · ☆304 · Updated 3 years ago
- Dense Contrastive Learning (DenseCL) for self-supervised representation learning, CVPR 2021 Oral. · ☆558 · Updated last year
- Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision · ☆218 · Updated 3 years ago
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" · ☆1,070 · Updated 2 years ago
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" · ☆554 · Updated 3 years ago
- Code repository of the paper "Modelling Long Range Dependencies in ND: From Task-Specific to a General Purpose CNN" https://arxiv.org/abs… · ☆183 · Updated 2 years ago
- [NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification · ☆600 · Updated last year
- Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones · ☆197 · Updated 4 years ago
- This is an official implementation for "Self-Supervised Learning with Swin Transformers". · ☆653 · Updated 3 years ago
- Implementation of Pixel-level Contrastive Learning, proposed in the paper "Propagate Yourself", in Pytorch · ☆258 · Updated 4 years ago
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" · ☆813 · Updated 2 years ago
- MLP-Like Vision Permutator for Visual Recognition (PyTorch) · ☆190 · Updated 3 years ago
- Recent Advances in MLP-based Models (MLP is all you need!) · ☆115 · Updated 2 years ago
- Two simple and effective designs of vision transformer, which are on par with the Swin Transformer · ☆599 · Updated 2 years ago
- ConvMAE: Masked Convolution Meets Masked Autoencoders · ☆504 · Updated 2 years ago
- Accelerating T2t-ViT by 1.6-3.6x. · ☆251 · Updated 3 years ago
- Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!) · ☆522 · Updated 5 months ago
- This is an official implementation of CvT: Introducing Convolutions to Vision Transformers. · ☆227 · Updated 2 years ago
- Code for the Convolutional Vision Transformer (ConViT) · ☆466 · Updated 3 years ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). · ☆225 · Updated 3 years ago
- Implementation of Linformer for Pytorch · ☆279 · Updated last year
- Implementation of Axial attention - attending to multi-dimensional data efficiently · ☆377 · Updated 3 years ago
- EsViT: Efficient self-supervised Vision Transformers · ☆410 · Updated last year
- This is an official implementation for "ResT: An Efficient Transformer for Visual Recognition". · ☆282 · Updated 2 years ago
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) · ☆1,324 · Updated 10 months ago