cmsflash / efficient-attention
An implementation of the efficient attention module.
☆305 · Updated 4 years ago
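For context, the mechanism this repository implements (efficient attention, which applies softmax to queries along the channel dimension and to keys along the sequence dimension so the global context `Kᵀ V` can be computed first) can be sketched as below. This is a minimal illustration, not the repository's API; the function name and tensor shapes are assumptions:

```python
import torch
import torch.nn.functional as F

def efficient_attention(q, k, v):
    """Sketch of efficient attention with linear complexity in sequence length.

    Assumed shapes: q, k are (batch, n, d_k); v is (batch, n, d_v).
    Normalizing q over channels and k over the sequence lets us multiply
    k^T v first, giving a (d_k, d_v) global context instead of an
    (n, n) attention map.
    """
    q = F.softmax(q, dim=-1)          # each query normalized over channels
    k = F.softmax(k, dim=1)           # keys normalized over the sequence
    context = k.transpose(1, 2) @ v   # (batch, d_k, d_v) global context
    return q @ context                # (batch, n, d_v)
```

Because the `(n, n)` attention map is never materialized, memory and compute scale linearly with `n` rather than quadratically.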
Alternatives and similar repositories for efficient-attention:
Users interested in efficient-attention are comparing it to the libraries listed below.
- ☆191 · Updated 2 years ago
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in Pytorch ☆304 · Updated 3 years ago
- Implementation of Linformer for Pytorch ☆274 · Updated last year
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" ☆285 · Updated 2 years ago
- [NeurIPS 2021] [T-PAMI] Global Filter Networks for Image Classification ☆471 · Updated last year
- Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision ☆217 · Updated 3 years ago
- ☆245 · Updated 3 years ago
- [ICLR 2021 top 3%] Is Attention Better Than Matrix Decomposition? ☆331 · Updated 2 years ago
- A PyTorch implementation of the 1d and 2d Sinusoidal positional encoding/embedding. ☆252 · Updated 4 years ago
- Dense Contrastive Learning (DenseCL) for self-supervised representation learning, CVPR 2021 Oral. ☆557 · Updated last year
- [ICLR 2022] Official implementation of cosformer-attention in cosFormer: Rethinking Softmax in Attention ☆188 · Updated 2 years ago
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" ☆812 · Updated 2 years ago
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,070 · Updated 2 years ago
- Implementation of Pixel-level Contrastive Learning, proposed in the paper "Propagate Yourself", in Pytorch ☆259 · Updated 4 years ago
- Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones ☆198 · Updated 4 years ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆225 · Updated 2 years ago
- An implementation of local windowed attention for language modeling ☆429 · Updated 2 months ago
- iFormer: Inception Transformer ☆244 · Updated 2 years ago
- MLP-Like Vision Permutator for Visual Recognition (PyTorch) ☆191 · Updated 2 years ago
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" ☆551 · Updated 2 years ago
- Implementation of Axial attention - attending to multi-dimensional data efficiently ☆373 · Updated 3 years ago
- Two simple and effective designs of vision transformer, which are on par with the Swin transformer ☆598 · Updated 2 years ago
- An All-MLP solution for Vision, from Google AI ☆1,015 · Updated 6 months ago
- Learning Rate Warmup in PyTorch ☆404 · Updated 2 weeks ago
- Implementation of Deformable Attention in Pytorch from the paper "Vision Transformer with Deformable Attention" ☆328 · Updated last month
- Transformer based on a variant of attention that is linear in complexity with respect to sequence length ☆751 · Updated 10 months ago
- This is an official implementation for "ResT: An Efficient Transformer for Visual Recognition". ☆282 · Updated 2 years ago
- This is an official PyTorch implementation of Fast Fourier Convolution. ☆366 · Updated 10 months ago
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms ☆259 · Updated 3 years ago
- PyTorch reimplementation of the paper "MaxViT: Multi-Axis Vision Transformer" [ECCV 2022] ☆162 · Updated last year