cmsflash / efficient-attention
An implementation of the efficient attention module.
☆313 · Updated 4 years ago
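The efficient attention module avoids materializing the quadratic n×n attention map by reordering the product: softmax-normalized keys are first aggregated against the values into a small global-context matrix, which the queries then read from. A minimal NumPy sketch of that idea (function names and shapes are illustrative, not the repository's API):

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def efficient_attention(q, k, v):
    """Linear-complexity attention: softmax(Q) @ (softmax(K)^T @ V).

    q, k: (batch, n, d_k); v: (batch, n, d_v).
    """
    q = softmax(q, axis=-1)           # normalize each query over channels
    k = softmax(k, axis=-2)           # normalize each key channel over positions
    context = k.swapaxes(-2, -1) @ v  # (batch, d_k, d_v) global context
    return q @ context                # (batch, n, d_v)
```

Because the n×n map is never formed, cost is O(n·d_k·d_v) in the sequence length n rather than O(n²), which is what makes this family of modules attractive for dense vision tasks.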
Alternatives and similar repositories for efficient-attention
Users interested in efficient-attention are comparing it to the libraries listed below.
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" ☆288 · Updated 3 years ago
- Implementation of Axial attention - attending to multi-dimensional data efficiently ☆380 · Updated 3 years ago
- ☆190 · Updated 2 years ago
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in Pytorc… ☆305 · Updated 3 years ago
- ☆246 · Updated 3 years ago
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" ☆556 · Updated 3 years ago
- Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision ☆217 · Updated 4 years ago
- PyTorch Implementation of CvT: Introducing Convolutions to Vision Transformers ☆225 · Updated 4 years ago
- [NeurIPS 2021] [T-PAMI] Global Filter Networks for Image Classification ☆477 · Updated last year
- Two simple and effective designs of vision transformer, which are on par with the Swin transformer ☆601 · Updated 2 years ago
- ☆200 · Updated 10 months ago
- iFormer: Inception Transformer ☆247 · Updated 2 years ago
- Implementation of Linformer for Pytorch ☆285 · Updated last year
- Dense Contrastive Learning (DenseCL) for self-supervised representation learning, CVPR 2021 Oral. ☆559 · Updated last year
- [NeurIPS 2022] HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions ☆336 · Updated last year
- An All-MLP solution for Vision, from Google AI ☆1,022 · Updated 8 months ago
- [ICLR 2021 top 3%] Is Attention Better Than Matrix Decomposition? ☆332 · Updated 2 years ago
- Code repository of the paper "Modelling Long Range Dependencies in ND: From Task-Specific to a General Purpose CNN" https://arxiv.org/abs… ☆184 · Updated 3 weeks ago
- Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones ☆199 · Updated 4 years ago
- PyTorch reimplementation of the paper "MaxViT: Multi-Axis Vision Transformer" [ECCV 2022]. ☆162 · Updated last year
- [NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification ☆608 · Updated last year
- ☆119 · Updated 3 years ago
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,072 · Updated 2 years ago
- MLP-Like Vision Permutator for Visual Recognition (PyTorch) ☆190 · Updated 3 years ago
- Implementation of Uniformer, a simple attention and 3d convolutional net that achieved SOTA in a number of video classification tasks, de… ☆100 · Updated 3 years ago
- ☆216 · Updated 3 years ago
- Neighborhood Attention Transformer, arxiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arxiv 2022 ☆1,117 · Updated last year
- This is an official implementation for "Self-Supervised Learning with Swin Transformers". ☆656 · Updated 4 years ago
- [TPAMI 2023, NeurIPS 2020] Code release for "Deep Multimodal Fusion by Channel Exchanging" ☆307 · Updated 10 months ago
- Learning Rate Warmup in PyTorch ☆410 · Updated 2 months ago