lucidrains / axial-attention
Implementation of Axial attention - attending to multi-dimensional data efficiently
☆397 · Updated 4 years ago
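To make the idea concrete: axial attention factorizes full 2D self-attention into one pass along each axis, so cost scales with H·W·(H+W) rather than (H·W)². Below is a minimal sketch of that factorization, not the repository's API; the `AxialAttention2D` name and channels-last layout are assumptions for illustration.

```python
import torch
import torch.nn as nn

class AxialAttention2D(nn.Module):
    """Attend along rows and columns separately instead of over all H*W positions."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):  # x: (B, H, W, C), channels last
        b, h, w, c = x.shape
        # Width axis: fold H into the batch so each row is a length-W sequence.
        rows = x.reshape(b * h, w, c)
        rows, _ = self.row_attn(rows, rows, rows)
        # Height axis: fold W into the batch so each column is a length-H sequence.
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)
        cols, _ = self.col_attn(cols, cols, cols)
        cols = cols.reshape(b, w, h, c).permute(0, 2, 1, 3)
        # Sum the two axial contributions (running them sequentially also works).
        return rows.reshape(b, h, w, c) + cols

x = torch.randn(2, 16, 16, 64)
print(AxialAttention2D(64)(x).shape)  # torch.Size([2, 16, 16, 64])
```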
Alternatives and similar repositories for axial-attention
Users interested in axial-attention are comparing it to the libraries listed below.
- An implementation of the efficient attention module. ☆327 · Updated 5 years ago
- An All-MLP solution for Vision, from Google AI ☆1,054 · Updated 6 months ago
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,078 · Updated 3 years ago
- Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision ☆217 · Updated 4 years ago
- This is a PyTorch re-implementation of Axial-DeepLab (ECCV 2020 Spotlight) ☆460 · Updated 4 years ago
- Implementation of Linformer for Pytorch ☆303 · Updated 2 years ago
- Implementation of 1D, 2D, and 3D FFT convolutions in PyTorch. Much faster than direct convolutions for large kernel sizes (see the FFT sketch after this list). ☆516 · Updated 2 years ago
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in Pytorch ☆310 · Updated 4 years ago
- A PyTorch implementation of the 1d and 2d Sinusoidal positional encoding/embedding (see the encoding sketch after this list). ☆260 · Updated 5 years ago
- Implementation of Pixel-level Contrastive Learning, proposed in the paper "Propagate Yourself", in Pytorch ☆264 · Updated 4 years ago
- Self-supervised vIsion Transformer (SiT) ☆337 · Updated 3 years ago
- Tiny PyTorch library for maintaining a moving average of a collection of parameters (see the EMA sketch after this list). ☆445 · Updated last year
- Learning Rate Warmup in PyTorch ☆415 · Updated 6 months ago
- Implementation of a U-net complete with efficient attention as well as the latest research findings ☆291 · Updated last year
- An implementation of 1D, 2D, and 3D positional encoding in Pytorch and TensorFlow ☆614 · Updated last year
- This is an official implementation of CvT: Introducing Convolutions to Vision Transformers. ☆590 · Updated 2 years ago
- Neighborhood Attention Transformer, arXiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arXiv 2022 ☆1,165 · Updated last year
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆822 · Updated last year
- [NeurIPS 2021] [T-PAMI] Global Filter Networks for Image Classification ☆495 · Updated 2 years ago
- PyTorch Implementation of CvT: Introducing Convolutions to Vision Transformers ☆229 · Updated 4 years ago
- This is an official implementation for "Self-Supervised Learning with Swin Transformers". ☆667 · Updated 4 years ago
- A PyTorch implementation of "CoAtNet: Marrying Convolution and Attention for All Data Sizes" ☆390 · Updated 4 years ago
- Code for the Convolutional Vision Transformer (ConViT) ☆470 · Updated 4 years ago
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" ☆557 · Updated 3 years ago
- Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!) ☆539 · Updated last year
- An implementation of local windowed attention for language modeling ☆492 · Updated 5 months ago
- Two simple and effective designs of vision transformer, which are on par with the Swin Transformer ☆607 · Updated 2 years ago
- ☆469 · Updated 2 years ago
- A simple way to keep track of an Exponential Moving Average (EMA) version of your Pytorch model ☆632 · Updated 3 weeks ago
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" ☆291 · Updated 3 years ago
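The FFT-convolution entry above rests on the convolution theorem: pointwise multiplication of spectra replaces the O(n·k) sliding dot product with O(n log n) transforms, which wins once the kernel is large. A minimal 1D sketch (the `fft_conv1d` helper is an assumption for illustration, not that repository's API):

```python
import torch
import torch.nn.functional as F

def fft_conv1d(signal, kernel):
    """Linear convolution via FFT: zero-pad to full length, multiply spectra, invert."""
    n = signal.shape[-1] + kernel.shape[-1] - 1  # full linear-convolution length
    return torch.fft.irfft(torch.fft.rfft(signal, n=n) * torch.fft.rfft(kernel, n=n), n=n)

signal, kernel = torch.randn(4096), torch.randn(512)
# Direct reference: conv1d computes cross-correlation, so flip the kernel.
direct = F.conv1d(signal.view(1, 1, -1), kernel.flip(-1).view(1, 1, -1),
                  padding=kernel.shape[-1] - 1).view(-1)
print(torch.allclose(fft_conv1d(signal, kernel), direct, atol=1e-3))  # True
```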
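The sinusoidal positional-encoding entries implement the fixed encoding from "Attention Is All You Need": channel pairs oscillate at geometrically spaced frequencies, sin on even channels and cos on odd ones. A minimal 1D version (the function name is illustrative):

```python
import math
import torch

def sinusoidal_encoding_1d(length, dim):
    """PE[p, 2i] = sin(p / 10000^(2i/dim)); PE[p, 2i+1] = cos(p / 10000^(2i/dim))."""
    pos = torch.arange(length).unsqueeze(1)  # (length, 1)
    freqs = torch.exp(torch.arange(0, dim, 2) * (-math.log(10000.0) / dim))
    pe = torch.zeros(length, dim)
    pe[:, 0::2] = torch.sin(pos * freqs)
    pe[:, 1::2] = torch.cos(pos * freqs)
    return pe

print(sinusoidal_encoding_1d(128, 64).shape)  # torch.Size([128, 64])
```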
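Both EMA entries above track a decayed average of the model weights, which is often what gets evaluated instead of the raw weights. The update after each optimizer step is just `shadow = decay * shadow + (1 - decay) * param`; a minimal sketch (the `ema_update` helper is an assumption, not either library's API):

```python
import copy
import torch

@torch.no_grad()
def ema_update(ema_model, model, decay=0.999):
    """shadow = decay * shadow + (1 - decay) * param, via an in-place lerp."""
    for ema_p, p in zip(ema_model.parameters(), model.parameters()):
        ema_p.lerp_(p, 1.0 - decay)

model = torch.nn.Linear(8, 8)
ema_model = copy.deepcopy(model)  # frozen shadow copy of the weights
# ... after each optimizer step:
ema_update(ema_model, model)
```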