cmsflash/efficient-attention
An implementation of the efficient attention module.
☆310 · Updated 4 years ago
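For context: efficient attention reorders the usual softmax attention so that keys are aggregated against values first, which avoids materializing the n × n attention map and makes the cost linear in sequence length. Below is a minimal PyTorch sketch of that idea, assuming flattened (batch, length, channels) inputs; the function name and tensor layout are illustrative, not this repository's actual API.

```python
import torch
import torch.nn.functional as F

def efficient_attention(q, k, v):
    """Linear-complexity attention sketch.
    q, k: (batch, n, d_k); v: (batch, n, d_v)."""
    q = F.softmax(q, dim=-1)         # normalize each query over its d_k channels
    k = F.softmax(k, dim=1)          # normalize each key channel over the n positions
    context = k.transpose(1, 2) @ v  # (batch, d_k, d_v): global context, O(n * d_k * d_v)
    return q @ context               # (batch, n, d_v): the n x n map is never formed

# usage sketch: memory grows linearly with sequence length
x = torch.randn(2, 4096, 64)
out = efficient_attention(x, x, x)
```

Applying the two softmaxes separately is what breaks the quadratic coupling: the product ρ_q(Q)(ρ_k(K)^T V) approximates standard softmax attention while the intermediate context matrix is only d_k × d_v.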
Alternatives and similar repositories for efficient-attention
Users interested in efficient-attention are comparing it to the repositories listed below.
- ☆190 · Updated 2 years ago
- Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision ☆217 · Updated 4 years ago
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in Pytorch ☆304 · Updated 3 years ago
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" ☆287 · Updated 3 years ago
- Implementation of Axial attention - attending to multi-dimensional data efficiently ☆377 · Updated 3 years ago
- Implementation of Deformable Attention in Pytorch from the paper "Vision Transformer with Deformable Attention" ☆338 · Updated 3 months ago
- [NeurIPS 2021] [T-PAMI] Global Filter Networks for Image Classification ☆473 · Updated last year
- ☆245 · Updated 3 years ago
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms ☆260 · Updated 3 years ago
- Implementation of Pixel-level Contrastive Learning, proposed in the paper "Propagate Yourself", in Pytorch ☆258 · Updated 4 years ago
- MLP-Like Vision Permutator for Visual Recognition (PyTorch) ☆190 · Updated 3 years ago
- Implementation of Linformer for Pytorch ☆283 · Updated last year
- Dense Contrastive Learning (DenseCL) for self-supervised representation learning, CVPR 2021 Oral ☆558 · Updated last year
- ConvMAE: Masked Convolution Meets Masked Autoencoders ☆504 · Updated 2 years ago
- Implementation of the 😇 Attention layer from the paper "Scaling Local Self-Attention For Parameter Efficient Visual Backbones" ☆198 · Updated 4 years ago
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" ☆555 · Updated 3 years ago
- Code repository of the paper "Modelling Long Range Dependencies in ND: From Task-Specific to a General Purpose CNN" https://arxiv.org/abs… ☆184 · Updated this week
- Official code for the paper "On the Connection between Local Attention and Dynamic Depth-wise Convolution", ICLR 2022 Spotlight ☆184 · Updated 2 years ago
- [ECCV 2022] Code for the paper "DaViT: Dual Attention Vision Transformer" ☆354 · Updated last year
- MetaFormer Baselines for Vision (TPAMI 2024) ☆459 · Updated 11 months ago
- ☆199 · Updated 9 months ago
- [ICLR 2022] Official implementation of cosformer-attention in "cosFormer: Rethinking Softmax in Attention" ☆190 · Updated 2 years ago
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,071 · Updated 2 years ago
- iFormer: Inception Transformer ☆247 · Updated 2 years ago
- [TPAMI 2023, NeurIPS 2020] Code release for "Deep Multimodal Fusion by Channel Exchanging" ☆307 · Updated 10 months ago
- [NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification ☆603 · Updated last year
- Two simple and effective designs of vision transformer, which are on par with the Swin transformer ☆601 · Updated 2 years ago
- A better PyTorch implementation of image local attention that reduces GPU memory usage by an order of magnitude ☆141 · Updated 3 years ago
- CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows, CVPR 2022 ☆566 · Updated last year
- Implementation of CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification ☆199 · Updated 4 years ago