wzlxjtu / PositionalEncoding2D
A PyTorch implementation of 1D and 2D sinusoidal positional encoding/embedding.
☆252 · Updated 4 years ago
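For reference, here is a minimal sketch of what a 2D sinusoidal encoding looks like in PyTorch, in the spirit of this repository: the channel dimension is split in half, with one half carrying sine/cosine terms for the x (width) position and the other half for the y (height) position. The function name and signature below are illustrative and not necessarily the repository's actual API.

```python
# Minimal sketch of 2D sinusoidal positional encoding (illustrative, not the repo's exact API).
import math
import torch

def sinusoidal_encoding_2d(d_model: int, height: int, width: int) -> torch.Tensor:
    """Return a (d_model, height, width) tensor of fixed sinusoidal encodings."""
    if d_model % 4 != 0:
        raise ValueError("d_model must be divisible by 4 for a 2D encoding")
    pe = torch.zeros(d_model, height, width)
    half = d_model // 2  # first half of the channels encodes x, second half encodes y
    div_term = torch.exp(torch.arange(0.0, half, 2) * -(math.log(10000.0) / half))
    pos_w = torch.arange(0.0, width).unsqueeze(1)   # (width, 1)
    pos_h = torch.arange(0.0, height).unsqueeze(1)  # (height, 1)
    # x-axis (width) terms fill the first half of the channels
    pe[0:half:2, :, :] = torch.sin(pos_w * div_term).t().unsqueeze(1).repeat(1, height, 1)
    pe[1:half:2, :, :] = torch.cos(pos_w * div_term).t().unsqueeze(1).repeat(1, height, 1)
    # y-axis (height) terms fill the second half
    pe[half::2, :, :] = torch.sin(pos_h * div_term).t().unsqueeze(2).repeat(1, 1, width)
    pe[half + 1::2, :, :] = torch.cos(pos_h * div_term).t().unsqueeze(2).repeat(1, 1, width)
    return pe
```

For a feature map of shape (B, C, H, W), the encoding would typically be broadcast and added, e.g. `x = x + sinusoidal_encoding_2d(C, H, W).unsqueeze(0)`.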
Alternatives and similar repositories for PositionalEncoding2D:
Users interested in PositionalEncoding2D are comparing it to the libraries listed below.
- An implementation of 1D, 2D, and 3D positional encoding in Pytorch and TensorFlow ☆575 · Updated 5 months ago
- Learning Rate Warmup in PyTorch ☆404 · Updated 2 weeks ago
- Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision ☆217 · Updated 3 years ago
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in Pytorch ☆304 · Updated 3 years ago
- Tiny PyTorch library for maintaining a moving average of a collection of parameters. ☆426 · Updated 5 months ago
- An All-MLP solution for Vision, from Google AI ☆1,015 · Updated 6 months ago
- Implementation of Axial attention - attending to multi-dimensional data efficiently ☆373 · Updated 3 years ago
- An implementation of the efficient attention module. ☆305 · Updated 4 years ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆260 · Updated 3 years ago
- Dense Contrastive Learning (DenseCL) for self-supervised representation learning, CVPR 2021 Oral. ☆557 · Updated last year
- A pytorch port of google-research/google-research/robust_loss/ ☆677 · Updated 3 years ago
- Implementation of Pixel-level Contrastive Learning, proposed in the paper "Propagate Yourself", in Pytorch ☆259 · Updated 4 years ago
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,070 · Updated 2 years ago
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆751 · Updated 10 months ago
- Self-supervised vIsion Transformer (SiT) ☆327 · Updated 2 years ago
- Implementation of Slot Attention from GoogleAI ☆417 · Updated 7 months ago
- Fully featured implementation of Routing Transformer ☆289 · Updated 3 years ago
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms ☆259 · Updated 3 years ago
- A better PyTorch implementation of image local attention which reduces the GPU memory by an order of magnitude. ☆138 · Updated 3 years ago
- Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones ☆198 · Updated 4 years ago
- Official PyTorch Repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆407 · Updated 8 months ago
- PyTorch implementation of Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning ☆490 · Updated 2 years ago
- Minimalist implementation of VQ-VAE in Pytorch ☆536 · Updated 3 years ago
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale ☆293 · Updated 3 years ago
- My take on a practical implementation of Linformer for Pytorch. ☆413 · Updated 2 years ago
- Pytorch implementation of "An intriguing failing of convolutional neural networks and the CoordConv solution" - https://arxiv.org/abs/180… ☆152 · Updated last year
- ☆449 · Updated last year
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆483 · Updated 3 years ago
- Code repository of the paper "Modelling Long Range Dependencies in ND: From Task-Specific to a General Purpose CNN" https://arxiv.org/abs… ☆183 · Updated 2 years ago
- ☆245 · Updated 3 years ago