cmsflash / efficient-attention
An implementation of the efficient attention module.
☆320 · Updated 4 years ago
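The repository implements efficient attention (Shen et al., 2021), which replaces the quadratic `softmax(QKᵀ)V` with the linear-complexity factorization `softmax(Q) · (softmax(K)ᵀ V)`. A minimal NumPy sketch of that formulation (function names are illustrative, not the repo's API):

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)  # stabilize before exponentiating
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def efficient_attention(q, k, v):
    """Linear-complexity attention: softmax(Q) @ (softmax(K)^T @ V).

    q, k: (n, d_k); v: (n, d_v).
    Cost is O(n * d_k * d_v) instead of O(n^2 * d) for dot-product attention.
    """
    q = softmax(q, axis=1)   # normalize each query over channels
    k = softmax(k, axis=0)   # normalize each key channel over positions
    context = k.T @ v        # (d_k, d_v): a global context summary
    return q @ context       # (n, d_v)
```

Because the `(d_k, d_v)` context matrix is computed once and reused by every query, memory and compute scale linearly in sequence length `n`.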
Alternatives and similar repositories for efficient-attention
Users interested in efficient-attention are comparing it to the libraries listed below.
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" ☆289 · Updated 3 years ago
- ☆192 · Updated 2 years ago
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in PyTorch ☆305 · Updated 3 years ago
- Implementation of Axial attention - attending to multi-dimensional data efficiently ☆384 · Updated 3 years ago
- Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision ☆218 · Updated 4 years ago
- [ICLR 2021 top 3%] Is Attention Better Than Matrix Decomposition? ☆334 · Updated 2 years ago
- Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones ☆199 · Updated 4 years ago
- ☆248 · Updated 3 years ago
- [NeurIPS 2021] [T-PAMI] Global Filter Networks for Image Classification ☆484 · Updated 2 years ago
- MLP-Like Vision Permutator for Visual Recognition (PyTorch) ☆191 · Updated 3 years ago
- PyTorch reimplementation of the paper "MaxViT: Multi-Axis Vision Transformer" [ECCV 2022] ☆163 · Updated 2 years ago
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms ☆259 · Updated 4 years ago
- Code repository of the paper "Modelling Long Range Dependencies in ND: From Task-Specific to a General Purpose CNN" https://arxiv.org/abs… ☆184 · Updated 2 months ago
- A PyTorch implementation of the 1d and 2d Sinusoidal positional encoding/embedding ☆253 · Updated 4 years ago
- Implementation of Linformer for Pytorch ☆294 · Updated last year
- Recent Advances in MLP-based Models (MLP is all you need!)
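Several of the repos above (Axial attention, MaxViT's multi-axis attention) take a different route to efficiency: factorize full 2-D attention into cheaper 1-D attentions along each spatial axis. A minimal NumPy sketch of that idea, under the assumption of plain dot-product self-attention and a single head (names are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # stabilize before exponentiating
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend(x):
    """Dot-product self-attention over the second-to-last axis of x."""
    d = x.shape[-1]
    scores = x @ x.swapaxes(-1, -2) / np.sqrt(d)  # (..., n, n)
    return softmax(scores, axis=-1) @ x

def axial_attention(x):
    """Attend along rows, then columns, of an (h, w, d) feature map.

    Two 1-D attentions cost O(h*w*(h+w)*d), versus O((h*w)**2 * d)
    for full attention over all h*w positions.
    """
    x = attend(x)                     # each row: attention over w positions
    x = attend(x.transpose(1, 0, 2))  # each column: attention over h positions
    return x.transpose(1, 0, 2)       # back to (h, w, d)
```

The factorization is lossy in the sense that a single layer only mixes positions sharing a row or column; stacking layers restores a global receptive field.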