xxxnell / how-do-vits-work
(ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?"
☆806 · Updated 2 years ago
Related projects
Alternatives and complementary repositories for how-do-vits-work
- This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling". ☆925 · Updated 2 years ago
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,062 · Updated last year
- Neighborhood Attention Transformer, arXiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arXiv 2022. ☆1,052 · Updated 5 months ago
- PyTorch implementation of MoCo v3 (https://arxiv.org/abs/2104.02057) ☆1,214 · Updated 2 years ago
- Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!) ☆499 · Updated this week
- EsViT: Efficient self-supervised Vision Transformers ☆408 · Updated last year
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) ☆449 · Updated 2 years ago
- A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.). ☆778 · Updated 3 months ago
- This is an official implementation for "Self-Supervised Learning with Swin Transformers".☆624Updated 3 years ago
- Code for the Convolutional Vision Transformer (ConViT)☆463Updated 3 years ago
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral)☆1,293Updated 5 months ago
- An All-MLP solution for Vision, from Google AI☆1,001Updated last month
- Explainability for Vision Transformers☆850Updated 2 years ago
- ConvMAE: Masked Convolution Meets Masked Autoencoders☆483Updated last year
- MultiMAE: Multi-modal Multi-task Masked Autoencoders, ECCV 2022☆548Updated last year
- iBOT : Image BERT Pre-Training with Online Tokenizer (ICLR 2022)☆676Updated 2 years ago
- ICCV2021, Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet☆1,149Updated last year
- MetaFormer Baselines for Vision (TPAMI 2024)☆417Updated 5 months ago
- This is an official implementation of CvT: Introducing Convolutions to Vision Transformers.☆556Updated last year
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers"☆544Updated 2 years ago
- PyTorch implementation of SimSiam (https://arxiv.org/abs/2011.10566) ☆1,159 · Updated last year
- Self-supervised vIsion Transformer (SiT) ☆324 · Updated last year
- Code release for the ConvNeXt V2 model ☆1,519 · Updated 2 months ago
- Recent Transformer-based CV and related works. ☆1,320 · Updated last year
- ☆438 · Updated last year
- [NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification ☆571 · Updated last year
- Dense Contrastive Learning (DenseCL) for self-supervised representation learning, CVPR 2021 Oral. ☆546 · Updated 10 months ago
- A simple way to keep track of an Exponential Moving Average (EMA) version of your PyTorch model ☆511 · Updated 2 weeks ago
- PyTorch reimplementation of the Vision Transformer (An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale) ☆1,934 · Updated 2 years ago
- Official PyTorch implementation of the "ImageNet-21K Pretraining for the Masses" paper (NeurIPS 2021) ☆732 · Updated last year