wzlxjtu / PositionalEncoding2D
A PyTorch implementation of 1D and 2D sinusoidal positional encoding/embedding.
☆253 · Updated 4 years ago
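For reference, below is a minimal sketch of what a 2D sinusoidal positional encoding looks like in PyTorch. The function name `positional_encoding_2d` and its signature are illustrative assumptions, not this repository's actual API; it follows the common convention of splitting the channels between the x and y axes, with the sin/cos frequency schedule from "Attention Is All You Need".

```python
import math
import torch

def positional_encoding_2d(d_model: int, height: int, width: int) -> torch.Tensor:
    """Build a (d_model, height, width) grid of fixed sinusoidal encodings.

    Illustrative sketch: the first half of the channels encodes the x (width)
    position, the second half the y (height) position, each with alternating
    sin/cos as in the original transformer paper.
    """
    if d_model % 4 != 0:
        raise ValueError("d_model must be divisible by 4 for 2D sin/cos encoding")
    pe = torch.zeros(d_model, height, width)
    d_half = d_model // 2
    # Standard transformer frequency schedule: 1 / 10000^(2i / d_half).
    div_term = torch.exp(torch.arange(0.0, d_half, 2) * (-math.log(10000.0) / d_half))
    pos_w = torch.arange(0.0, width).unsqueeze(1)   # (width, 1)
    pos_h = torch.arange(0.0, height).unsqueeze(1)  # (height, 1)
    # x position fills channels [0, d_half), alternating sin/cos...
    pe[0:d_half:2] = torch.sin(pos_w * div_term).t().unsqueeze(1).expand(-1, height, -1)
    pe[1:d_half:2] = torch.cos(pos_w * div_term).t().unsqueeze(1).expand(-1, height, -1)
    # ...and y position fills channels [d_half, d_model).
    pe[d_half::2] = torch.sin(pos_h * div_term).t().unsqueeze(2).expand(-1, -1, width)
    pe[d_half + 1::2] = torch.cos(pos_h * div_term).t().unsqueeze(2).expand(-1, -1, width)
    return pe
```

For example, `positional_encoding_2d(64, 32, 32)` returns a `(64, 32, 32)` tensor that can be broadcast-added to a feature map of shape `(batch, 64, 32, 32)`.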
Alternatives and similar repositories for PositionalEncoding2D
Users interested in PositionalEncoding2D are comparing it to the repositories listed below.
- An implementation of 1D, 2D, and 3D positional encoding in Pytorch and TensorFlow ☆602 · Updated 9 months ago
- Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision ☆218 · Updated 4 years ago
- Learning Rate Warmup in PyTorch ☆410 · Updated last month
- Tiny PyTorch library for maintaining a moving average of a collection of parameters. ☆434 · Updated 10 months ago
- An All-MLP solution for Vision, from Google AI ☆1,034 · Updated last month
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in Pytorch ☆305 · Updated 3 years ago
- Pytorch implementation of "An intriguing failing of convolutional neural networks and the CoordConv solution" - https://arxiv.org/abs/180… ☆153 · Updated last year
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,076 · Updated 2 years ago
- An implementation of the efficient attention module. ☆320 · Updated 4 years ago
- Self-supervised vIsion Transformer (SiT) ☆337 · Updated 2 years ago
- Implementation of Pixel-level Contrastive Learning, proposed in the paper "Propagate Yourself", in Pytorch ☆259 · Updated 4 years ago
- PyTorch implementation of Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning ☆494 · Updated 3 years ago
- Implementation of Slot Attention from GoogleAI ☆451 · Updated 11 months ago
- Implementing Stand-Alone Self-Attention in Vision Models using Pytorch ☆455 · Updated 5 years ago
- PyTorch Implementation of CvT: Introducing Convolutions to Vision Transformers ☆226 · Updated 4 years ago
- Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones ☆199 · Updated 4 years ago
- A better PyTorch implementation of image local attention which reduces the GPU memory by an order of magnitude. ☆141 · Updated 3 years ago
- Implementation of Axial attention - attending to multi-dimensional data efficiently ☆384 · Updated 3 years ago
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆486 · Updated 4 years ago
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale ☆297 · Updated 3 years ago
- EsViT: Efficient self-supervised Vision Transformers ☆413 · Updated last year
- This is a PyTorch re-implementation of Axial-DeepLab (ECCV 2020 Spotlight) ☆457 · Updated 4 years ago
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch ☆1,163 · Updated last year
- Code for the Convolutional Vision Transformer (ConViT) ☆466 · Updated 3 years ago
- Compute CNN receptive field size in pytorch in one line ☆361 · Updated last year
- A Pytorch-Lightning implementation of self-supervised algorithms ☆542 · Updated 3 years ago
- This is an official implementation for "Self-Supervised Learning with Swin Transformers". ☆658 · Updated 4 years ago
- MultiMAE: Multi-modal Multi-task Masked Autoencoders, ECCV 2022 ☆590 · Updated 2 years ago
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" ☆819 · Updated 3 years ago