microsoft / Focal-Transformer
[NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers"
☆549 · Updated 2 years ago
Alternatives and similar repositories for Focal-Transformer:
Users interested in Focal-Transformer are comparing it to the repositories listed below.
- This is an official implementation for "Self-Supervised Learning with Swin Transformers". ☆643 · Updated 3 years ago
- Pytorch implementation of "All Tokens Matter: Token Labeling for Training Better Vision Transformers" ☆427 · Updated last year
- CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows, CVPR 2022 ☆558 · Updated last year
- Official PyTorch implementation of Fully Attentional Networks ☆476 · Updated last year
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" ☆284 · Updated 2 years ago
- Two simple and effective designs of vision transformer, which is on par with the Swin transformer ☆598 · Updated 2 years ago
- Code for the Convolutional Vision Transformer (ConViT) ☆467 · Updated 3 years ago
- (ICCV 2021 Oral) CoaT: Co-Scale Conv-Attentional Image Transformers ☆231 · Updated 3 years ago
- This is an official implementation for "ResT: An Efficient Transformer for Visual Recognition". ☆282 · Updated 2 years ago
- This is an official implementation of CvT: Introducing Convolutions to Vision Transformers. ☆569 · Updated last year
- LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference ☆608 · Updated 2 years ago
- [ECCV 2022] Code for paper "DaViT: Dual Attention Vision Transformer" ☆345 · Updated last year
- This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling". ☆955 · Updated 2 years ago
- MLP-Like Vision Permutator for Visual Recognition (PyTorch) ☆191 · Updated 2 years ago
- ☆245 · Updated 2 years ago
- ConvMAE: Masked Convolution Meets Masked Autoencoders ☆493 · Updated last year
- PyTorch Implementation of CvT: Introducing Convolutions to Vision Transformers ☆227 · Updated 3 years ago
- Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones ☆198 · Updated 3 years ago
- ☆191 · Updated 2 years ago
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in PyTorch ☆305 · Updated 3 years ago
- Implementation of Bottleneck Transformer in Pytorch ☆676 · Updated 3 years ago
- [NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification ☆595 · Updated last year
- Dense Contrastive Learning (DenseCL) for self-supervised representation learning, CVPR 2021 Oral. ☆556 · Updated last year
- This is a PyTorch re-implementation of Axial-DeepLab (ECCV 2020 Spotlight) ☆451 · Updated 3 years ago
- [NeurIPS 2022] Official code for "Focal Modulation Networks" ☆720 · Updated last year
- This is an official implementation of CvT: Introducing Convolutions to Vision Transformers. ☆227 · Updated 2 years ago
- EsViT: Efficient self-supervised Vision Transformers ☆411 · Updated last year
- Bottleneck Transformers for Visual Recognition ☆276 · Updated 3 years ago
- [TPAMI 2022 & CVPR 2021 Oral] UP-DETR: Unsupervised Pre-training for Object Detection with Transformers ☆480 · Updated last year
- [ICCV 2021] Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet ☆1,179 · Updated last year