cmsflash / efficient-attention
An implementation of the efficient attention module.
☆322 · Updated 4 years ago
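The idea behind this kind of module is to normalize queries and keys separately and multiply Kᵀ·V before applying Q, so the O(n²) attention map is never materialized and the cost becomes linear in sequence length. A minimal NumPy sketch of that factorization (illustrative only — the function names and shapes here are assumptions, not this repository's API):

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def efficient_attention(Q, K, V):
    """Linear-complexity attention sketch (hypothetical helper, not the repo's API).

    Q, K: (n, d_k) query/key matrices; V: (n, d_v) value matrix.
    """
    q = softmax(Q, axis=-1)   # softmax over the feature axis
    k = softmax(K, axis=0)    # softmax over the token axis
    context = k.T @ V         # (d_k, d_v) global context — O(n·d_k·d_v), not O(n²)
    return q @ context        # (n, d_v)
```

Because `k`'s columns and `q`'s rows each sum to one, the output rows are convex combinations of the value rows; the result approximates, but is not identical to, standard softmax attention.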
Alternatives and similar repositories for efficient-attention
Users interested in efficient-attention are comparing it to the libraries listed below.
- ☆193 · Updated 2 years ago
- Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision · ☆217 · Updated 4 years ago
- Implementation of Axial attention - attending to multi-dimensional data efficiently · ☆388 · Updated 4 years ago
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" · ☆291 · Updated 3 years ago
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in Pytorc… · ☆309 · Updated 3 years ago
- [ICLR 2021 top 3%] Is Attention Better Than Matrix Decomposition? · ☆338 · Updated 3 years ago
- MLP-like Vision Permutator for Visual Recognition (PyTorch) · ☆192 · Updated 3 years ago
- Implementation of the 😇 Attention layer from the paper "Scaling Local Self-Attention For Parameter Efficient Visual Backbones" · ☆200 · Updated 4 years ago
- Implementation of Linformer for Pytorch · ☆301 · Updated last year
- ☆249 · Updated 3 years ago
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms · ☆259 · Updated 4 years ago
- [NeurIPS 2021] [T-PAMI] Global Filter Networks for Image Classification · ☆489 · Updated 2 years ago
- Code repository of the paper "Modelling Long Range Dependencies in ND: From Task-Specific to a General Purpose CNN" (https://arxiv.org/abs…) · ☆183 · Updated 6 months ago
- PyTorch reimplementation of the paper "MaxViT: Multi-Axis Vision Transformer" [ECCV 2022] · ☆163 · Updated 2 years ago
- A PyTorch implementation of the 1d and 2d sinusoidal positional encoding/embedding · ☆260 · Updated 4 years ago
- ☆203 · Updated last year
- iFormer: Inception Transformer · ☆247 · Updated 2 years ago
- A better PyTorch implementation of image local attention which reduces GPU memory by an order of magnitude · ☆140 · Updated 3 years ago
- [ICLR 2022] Official implementation of cosformer-attention in "cosFormer: Rethinking Softmax in Attention" · ☆196 · Updated 2 years ago
- Unofficial implementation of MLP-Mixer, gMLP, resMLP, Vision Permutator, S2MLP, S2MLPv2, RaftMLP, HireMLP, ConvMLP, AS-MLP, SparseMLP, Co… · ☆169 · Updated 3 years ago
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" · ☆558 · Updated 3 years ago
- Implementation of Deformable Attention in Pytorch from the paper "Vision Transformer with Deformable Attention" · ☆364 · Updated 9 months ago
- Implementation of Uniformer, a simple attention and 3d convolutional net that achieved SOTA in a number of video classification tasks, de… · ☆102 · Updated 3 years ago
- PyTorch implementation of "CvT: Introducing Convolutions to Vision Transformers" · ☆228 · Updated 4 years ago
- [ICLR 2022 Spotlight] Official code for the paper "On the Connection between Local Attention and Dynamic Depth-wise Convolution" · ☆186 · Updated 2 years ago
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" · ☆1,077 · Updated 3 years ago
- Recent Advances in MLP-based Models (MLP is all you need!) · ☆117 · Updated 2 years ago
- [ECCV 2022] Code for the paper "DaViT: Dual Attention Vision Transformer" · ☆369 · Updated last year
- [CVPR 2022 Oral] Official JAX implementation of "Learned Queries for Efficient Local Attention" · ☆118 · Updated 3 years ago
- [ICLR 2023] "More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity"; [ICML 2023] "Are Large Kernels Better Teachers… · ☆281 · Updated 2 years ago