cmsflash / efficient-attention
An implementation of the efficient attention module.
☆317 · Updated 4 years ago
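The repository implements the mechanism from "Efficient Attention: Attention with Linear Complexities" (Shen et al.). Below is a minimal sketch of the core idea, not the repository's actual code (function name and tensor shapes are illustrative assumptions): softmax is applied to Q over the channel dimension and to K over the sequence dimension, so KᵀV can be computed first and the cost drops from O(n²·d) to O(n·d²) in sequence length n.

```python
import torch
import torch.nn.functional as F

def efficient_attention(q, k, v):
    """Sketch of efficient (linear-complexity) attention.

    q, k: (batch, n, d_k); v: (batch, n, d_v)
    """
    q = F.softmax(q, dim=-1)  # normalize each position's query over channels
    k = F.softmax(k, dim=1)   # normalize each key channel over sequence positions
    context = torch.einsum('bnd,bne->bde', k, v)     # K^T V: (batch, d_k, d_v)
    return torch.einsum('bnd,bde->bne', q, context)  # (batch, n, d_v)

# Usage: cost grows linearly in n, so long sequences stay cheap.
q = k = torch.randn(2, 4096, 64)
v = torch.randn(2, 4096, 128)
out = efficient_attention(q, k, v)  # (2, 4096, 128)
```

Because the (d_k × d_v) context matrix is formed before touching Q, the n × n attention map is never materialized; that is the entire source of the efficiency claim.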
Alternatives and similar repositories for efficient-attention
Users interested in efficient-attention are comparing it to the libraries listed below.
- ☆190 · Updated 2 years ago
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" ☆288 · Updated 3 years ago
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in Pytorch ☆305 · Updated 3 years ago
- Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision ☆217 · Updated 4 years ago
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆775 · Updated last year
- Implementation of Axial attention - attending to multi-dimensional data efficiently ☆384 · Updated 3 years ago
- [ICLR 2021 top 3%] Is Attention Better Than Matrix Decomposition? ☆332 · Updated 2 years ago
- Implementation of Pixel-level Contrastive Learning, proposed in the paper "Propagate Yourself", in Pytorch ☆259 · Updated 4 years ago
- Implementation of Linformer for Pytorch ☆288 · Updated last year
- [NeurIPS 2021] [T-PAMI] Global Filter Networks for Image Classification ☆479 · Updated 2 years ago
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms ☆260 · Updated 4 years ago
- Dense Contrastive Learning (DenseCL) for self-supervised representation learning, CVPR 2021 Oral. ☆560 · Updated last year
- ☆247 · Updated 3 years ago
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,074 · Updated 2 years ago
- An implementation of local windowed attention for language modeling ☆454 · Updated 5 months ago
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" ☆556 · Updated 3 years ago
- Implementation of Deformable Attention in Pytorch from the paper "Vision Transformer with Deformable Attention" ☆344 · Updated 4 months ago
- Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones ☆199 · Updated 4 years ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆225 · Updated 3 years ago
- This is an official implementation for "ResT: An Efficient Transformer for Visual Recognition". ☆285 · Updated 2 years ago
- iFormer: Inception Transformer ☆247 · Updated 2 years ago
- Unofficial Implementation of MLP-Mixer, gMLP, resMLP, Vision Permutator, S2MLP, S2MLPv2, RaftMLP, HireMLP, ConvMLP, AS-MLP, SparseMLP, Co… ☆169 · Updated 2 years ago
- ☆200 · Updated 10 months ago
- Neighborhood Attention Transformer, arXiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arXiv 2022 ☆1,122 · Updated last year
- Implementation of CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification ☆201 · Updated 4 years ago
- MLP-Like Vision Permutator for Visual Recognition (PyTorch) ☆191 · Updated 3 years ago
- Official code for the paper "On the Connection between Local Attention and Dynamic Depth-wise Convolution", ICLR 2022 Spotlight ☆184 · Updated 2 years ago
- [NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification ☆610 · Updated last year
- Fully featured implementation of Routing Transformer ☆295 · Updated 3 years ago
- MetaFormer Baselines for Vision (TPAMI 2024) ☆472 · Updated last year