cmsflash / efficient-attention
An implementation of the efficient attention module.
☆320 · Updated 4 years ago
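For context: efficient attention (Shen et al., "Efficient Attention: Attention with Linear Complexities", WACV 2021) avoids forming the n×n attention map by applying softmax to queries and keys separately, so the cost drops from O(n²·d) to O(n·d²). A minimal single-head sketch in PyTorch; the function name and (batch, length, dim) shapes are illustrative assumptions, not the module's actual API:

```python
import torch
import torch.nn.functional as F

def efficient_attention(q, k, v):
    # q, k: (B, N, Dk); v: (B, N, Dv). Assumed shapes, not the repo's API.
    q = F.softmax(q, dim=-1)              # normalize each query over features
    k = F.softmax(k, dim=1)               # normalize keys over the N positions
    # Aggregate values into a small (Dk, Dv) context first: O(N * Dk * Dv).
    context = torch.einsum('bnd,bne->bde', k, v)
    # Then read the context out per query: again linear in N.
    return torch.einsum('bnd,bde->bne', q, context)

x = torch.randn(2, 4096, 64)
out = efficient_attention(x, x, x)        # (2, 4096, 64), no 4096x4096 map
```

Because the values are aggregated before any query interaction, memory and compute scale linearly with sequence length, which is the property the repository's name refers to.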
Alternatives and similar repositories for efficient-attention
Users interested in efficient-attention are comparing it to the repositories listed below.
- ☆192 · Updated 2 years ago
- Implementation of Transformer in Transformer, pixel-level attention paired with patch-level attention for image classification, in PyTorch ☆307 · Updated 3 years ago
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" ☆290 · Updated 3 years ago
- Implementation of Axial attention - attending to multi-dimensional data efficiently ☆385 · Updated 4 years ago
- Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision ☆218 · Updated 4 years ago
- [ICLR 2021 top 3%] Is Attention Better Than Matrix Decomposition? ☆338 · Updated 2 years ago
- MLP-Like Vision Permutator for Visual Recognition (PyTorch) ☆192 · Updated 3 years ago
- ☆248 · Updated 3 years ago
- Implementation of Linformer for PyTorch ☆298 · Updated last year
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms ☆259 · Updated 4 years ago
- Implementation of the 😇 Attention layer from the paper "Scaling Local Self-Attention for Parameter Efficient Visual Backbones" ☆199 · Updated 4 years ago
- Code repository of the paper "Modelling Long Range Dependencies in ND: From Task-Specific to a General Purpose CNN" (https://arxiv.org/abs…) ☆183 · Updated 4 months ago
- [NeurIPS 2021] [T-PAMI] Global Filter Networks for Image Classification ☆486 · Updated 2 years ago
- PyTorch reimplementation of the paper "MaxViT: Multi-Axis Vision Transformer" [ECCV 2022] ☆163 · Updated 2 years ago
- Recent Advances in MLP-based Models (MLP is all you need!) ☆116 · Updated 2 years ago
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity) ☆75 · Updated 5 years ago
- A better PyTorch implementation of image local attention, which reduces GPU memory by an order of magnitude ☆141 · Updated 3 years ago
- Official PyTorch implementation of Long-Short Transformer (NeurIPS 2021) ☆226 · Updated 3 years ago
- A PyTorch implementation of 1D and 2D sinusoidal positional encoding/embedding ☆257 · Updated 4 years ago
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" ☆558 · Updated 3 years ago
- [ICLR 2022] Official implementation of cosformer-attention in "cosFormer: Rethinking Softmax in Attention" ☆196 · Updated 2 years ago
- iFormer: Inception Transformer ☆246 · Updated 2 years ago
- ☆199 · Updated last year
- PyTorch implementation of CvT: Introducing Convolutions to Vision Transformers ☆226 · Updated 4 years ago
- Implementation of Deformable Attention in PyTorch from the paper "Vision Transformer with Deformable Attention" ☆356 · Updated 7 months ago
- Learning Rate Warmup in PyTorch ☆411 · Updated 2 months ago
- Attention mechanism ☆53 · Updated 4 years ago
- Implementation of 1D, 2D, and 3D FFT convolutions in PyTorch; much faster than direct convolutions for large kernel sizes (see the sketch after this list) ☆504 · Updated last year
- Official code for the paper "On the Connection between Local Attention and Dynamic Depth-wise Convolution" (ICLR 2022 Spotlight) ☆186 · Updated 2 years ago
- Accelerating T2T-ViT by 1.6-3.6x ☆252 · Updated 3 years ago
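Tangentially, the FFT-convolution entry above relies on the convolution theorem: pointwise multiplication of spectra equals convolution in the signal domain, turning an O(N·K) sliding window into O(N log N) transforms, which is why it wins for large kernels. A hedged 1-D sketch with torch.fft (function name and shapes are assumptions, not that repo's API); note it computes a true, kernel-flipped convolution rather than the cross-correlation of nn.Conv1d:

```python
import torch

def fft_conv1d(signal: torch.Tensor, kernel: torch.Tensor) -> torch.Tensor:
    # signal: (B, N); kernel: (K,). Illustrative shapes, not the repo's API.
    n_out = signal.shape[-1] + kernel.shape[-1] - 1   # full convolution length
    sig_f = torch.fft.rfft(signal, n=n_out)           # rfft zero-pads to n_out
    ker_f = torch.fft.rfft(kernel, n=n_out)
    # Pointwise product in the frequency domain == convolution in time.
    return torch.fft.irfft(sig_f * ker_f, n=n_out)

x = torch.randn(4, 4096)
w = torch.randn(512)                                  # large kernel: FFT wins here
y = fft_conv1d(x, w)                                  # (4, 4607)
```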