microsoft / Focal-Transformer
[NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers"
☆545 · Updated 2 years ago
Related projects
Alternatives and complementary repositories for Focal-Transformer
- Code for the Convolutional Vision Transformer (ConViT) ☆462 · Updated 3 years ago
- This is an official implementation for "Self-Supervised Learning with Swin Transformers". ☆627 · Updated 3 years ago
- CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows, CVPR 2022 ☆547 · Updated last year
- This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling". ☆927 · Updated 2 years ago
- This is an official implementation for "ResT: An Efficient Transformer for Visual Recognition". ☆281 · Updated 2 years ago
- PyTorch implementation of "All Tokens Matter: Token Labeling for Training Better Vision Transformers" ☆425 · Updated last year
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" ☆281 · Updated 2 years ago
- Official PyTorch implementation of Fully Attentional Networks ☆467 · Updated last year
- This is a PyTorch re-implementation of Axial-DeepLab (ECCV 2020 Spotlight) ☆450 · Updated 3 years ago
- (ICCV 2021 Oral) CoaT: Co-Scale Conv-Attentional Image Transformers ☆228 · Updated 2 years ago
- Two simple and effective vision transformer designs that are on par with the Swin Transformer ☆580 · Updated last year
- LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference ☆602 · Updated 2 years ago
- This is an official implementation of CvT: Introducing Convolutions to Vision Transformers. ☆555 · Updated last year
- [ECCV 2022] Code for paper "DaViT: Dual Attention Vision Transformer" ☆330 · Updated 9 months ago
- EsViT: Efficient self-supervised Vision Transformers ☆408 · Updated last year
- Dense Contrastive Learning (DenseCL) for self-supervised representation learning, CVPR 2021 Oral ☆547 · Updated 10 months ago
- Implementation of the 😇 Attention layer from the paper "Scaling Local Self-Attention for Parameter Efficient Visual Backbones" ☆199 · Updated 3 years ago
- ConvMAE: Masked Convolution Meets Masked Autoencoders ☆484 · Updated last year
- Implementation of Transformer in Transformer, pixel-level attention paired with patch-level attention for image classification, in PyTorch ☆300 · Updated 2 years ago
- Per-Pixel Classification is Not All You Need for Semantic Segmentation (NeurIPS 2021, Spotlight) ☆1,354 · Updated 2 years ago
- [NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification ☆572 · Updated last year
- Repository of Vision Transformer with Deformable Attention (CVPR 2022) and DAT++: Spatially Dynamic Vision Transformer with Deformable Attention ☆795 · Updated 7 months ago
- PyTorch implementation of CvT: Introducing Convolutions to Vision Transformers ☆224 · Updated 3 years ago
- RepMLPNet: Hierarchical Vision MLP with Re-parameterized Locality (CVPR 2022) ☆303 · Updated last year
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) ☆1,295 · Updated 5 months ago
- Propagate Yourself: Exploring Pixel-Level Consistency for Unsupervised Visual Representation Learning, CVPR 2021 ☆332 · Updated 3 years ago
- [NeurIPS 2022] Official code for "Focal Modulation Networks" ☆700 · Updated last year
- Bottleneck Transformers for Visual Recognition ☆274 · Updated 3 years ago