wzlxjtu / PositionalEncoding2D
A PyTorch implementation of 1D and 2D sinusoidal positional encoding/embedding.
☆252 · Updated 4 years ago
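For orientation, here is a minimal sketch of what 2D sinusoidal positional encoding looks like in PyTorch. This is not the repository's exact code; the function names (`sinusoidal_encoding_1d`, `sinusoidal_encoding_2d`) and the channel split (first half encodes y, second half encodes x) are illustrative assumptions that follow the common Transformer-style formulation.

```python
# Sketch only: 2D sinusoidal positional encoding, not the repository's exact implementation.
import math
import torch

def sinusoidal_encoding_1d(length: int, dim: int) -> torch.Tensor:
    """Classic 1D sinusoidal encoding of shape (length, dim); dim must be even."""
    position = torch.arange(length, dtype=torch.float32).unsqueeze(1)      # (length, 1)
    div_term = torch.exp(torch.arange(0, dim, 2, dtype=torch.float32)
                         * (-math.log(10000.0) / dim))                     # (dim / 2,)
    pe = torch.zeros(length, dim)
    pe[:, 0::2] = torch.sin(position * div_term)                           # even channels
    pe[:, 1::2] = torch.cos(position * div_term)                           # odd channels
    return pe

def sinusoidal_encoding_2d(height: int, width: int, dim: int) -> torch.Tensor:
    """2D encoding of shape (height, width, dim); dim must be divisible by 4."""
    assert dim % 4 == 0, "dim must be divisible by 4 for a 2D encoding"
    half = dim // 2
    pe = torch.zeros(height, width, dim)
    pe[:, :, :half] = sinusoidal_encoding_1d(height, half).unsqueeze(1)    # y coordinate
    pe[:, :, half:] = sinusoidal_encoding_1d(width, half).unsqueeze(0)     # x coordinate
    return pe

# Example: add the encoding to a (batch, H, W, C) feature map.
feats = torch.randn(2, 16, 16, 64)
feats = feats + sinusoidal_encoding_2d(16, 16, 64)
```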
Alternatives and similar repositories for PositionalEncoding2D
Users interested in PositionalEncoding2D are comparing it to the libraries listed below.
- An implementation of 1D, 2D, and 3D positional encoding in Pytorch and TensorFlow ☆590 · Updated 8 months ago
- An implementation of the efficient attention module. ☆317 · Updated 4 years ago
- Learning Rate Warmup in PyTorch ☆410 · Updated last week
- Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision ☆217 · Updated 4 years ago
- Tiny PyTorch library for maintaining a moving average of a collection of parameters. ☆431 · Updated 8 months ago
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in Pytorch ☆305 · Updated 3 years ago
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch ☆1,153 · Updated last year
- Implementation of Slot Attention from GoogleAI ☆440 · Updated 10 months ago
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" ☆817 · Updated 2 years ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆264 · Updated 3 years ago
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,074 · Updated 2 years ago
- An implementation of Performer, a linear attention-based transformer, in Pytorch ☆1,134 · Updated 3 years ago
- An All-MLP solution for Vision, from Google AI ☆1,025 · Updated 9 months ago
- Implementing Stand-Alone Self-Attention in Vision Models using Pytorch ☆455 · Updated 5 years ago
- Dense Contrastive Learning (DenseCL) for self-supervised representation learning, CVPR 2021 Oral. ☆560 · Updated last year
- Implementation of Pixel-level Contrastive Learning, proposed in the paper "Propagate Yourself", in Pytorch ☆259 · Updated 4 years ago
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" ☆556 · Updated 3 years ago
- ☆247 · Updated 3 years ago
- Implementation of Axial attention - attending to multi-dimensional data efficiently ☆384 · Updated 3 years ago
- ☆462 · Updated 2 years ago
- PyTorch Implementation of CvT: Introducing Convolutions to Vision Transformers ☆226 · Updated 4 years ago
- This is a PyTorch re-implementation of Axial-DeepLab (ECCV 2020 Spotlight) ☆456 · Updated 4 years ago
- A Pytorch-Lightning implementation of self-supervised algorithms ☆539 · Updated 3 years ago
- Code for the Convolutional Vision Transformer (ConViT) ☆465 · Updated 3 years ago
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆486 · Updated 4 years ago
- PyTorch implementation of Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning ☆492 · Updated 3 years ago
- Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones ☆199 · Updated 4 years ago
- Fully featured implementation of Routing Transformer ☆295 · Updated 3 years ago
- Transformer based on a variant of attention that is linear complexity in respect to sequence length ☆777 · Updated last year
- A pytorch port of google-research/google-research/robust_loss/ ☆684 · Updated 3 years ago