microsoft / Focal-Transformer
[NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers"
☆558 · Updated 3 years ago
Alternatives and similar repositories for Focal-Transformer
Users interested in Focal-Transformer are comparing it to the libraries listed below.
- Code for the Convolutional Vision Transformer (ConViT) ☆468 · Updated 3 years ago
- This is an official implementation for "Self-Supervised Learning with Swin Transformers". ☆658 · Updated 4 years ago
- Pytorch implementation of "All Tokens Matter: Token Labeling for Training Better Vision Transformers" ☆433 · Updated 2 years ago
- (ICCV 2021 Oral) CoaT: Co-Scale Conv-Attentional Image Transformers ☆232 · Updated 3 years ago
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" ☆290 · Updated 3 years ago
- Official PyTorch implementation of Fully Attentional Networks ☆479 · Updated 2 years ago
- CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows, CVPR 2022 ☆581 · Updated last year
- Two simple and effective designs of vision transformer, which are on par with the Swin Transformer ☆605 · Updated 2 years ago
- This is an official implementation of CvT: Introducing Convolutions to Vision Transformers. ☆582 · Updated 2 years ago
- Implementation of Transformer in Transformer, pixel-level attention paired with patch-level attention for image classification, in PyTorch ☆307 · Updated 3 years ago
- This is an official implementation for "ResT: An Efficient Transformer for Visual Recognition".