locuslab / convmixer
Implementation of ConvMixer for "Patches Are All You Need? 🤷"
★ 1,077 · Updated 2 years ago
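For context, the ConvMixer architecture proposed in the paper is compact enough to sketch in a few lines of PyTorch: a patch-embedding convolution followed by repeated blocks that pair a residual depthwise convolution with a pointwise convolution, each using GELU and BatchNorm. The snippet below is a minimal sketch based on the paper's description; the default hyperparameters (`dim`, `depth`, `kernel_size`, `patch_size`) are illustrative and may differ from the repository's training configurations.

```python
import torch
import torch.nn as nn

class Residual(nn.Module):
    """Adds a skip connection around an arbitrary module."""
    def __init__(self, fn):
        super().__init__()
        self.fn = fn

    def forward(self, x):
        return self.fn(x) + x

def ConvMixer(dim=256, depth=8, kernel_size=9, patch_size=7, n_classes=1000):
    return nn.Sequential(
        # Patch embedding: non-overlapping patches via a strided convolution.
        nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size),
        nn.GELU(),
        nn.BatchNorm2d(dim),
        # Repeated ConvMixer blocks: residual depthwise conv (spatial mixing)
        # followed by a pointwise conv (channel mixing).
        *[nn.Sequential(
            Residual(nn.Sequential(
                nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
                nn.GELU(),
                nn.BatchNorm2d(dim),
            )),
            nn.Conv2d(dim, dim, kernel_size=1),
            nn.GELU(),
            nn.BatchNorm2d(dim),
        ) for _ in range(depth)],
        # Global average pooling and a linear classification head.
        nn.AdaptiveAvgPool2d((1, 1)),
        nn.Flatten(),
        nn.Linear(dim, n_classes),
    )

# Example usage with an ImageNet-sized input.
model = ConvMixer(dim=256, depth=8)
logits = model(torch.randn(1, 3, 224, 224))  # shape: (1, 1000)
```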
Alternatives and similar repositories for convmixer
Users interested in convmixer are comparing it to the repositories listed below.
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" · ★ 821 · Updated 3 years ago
- An All-MLP solution for Vision, from Google AI · ★ 1,038 · Updated 2 months ago
- Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!) · ★ 535 · Updated 10 months ago
- Code for the Convolutional Vision Transformer (ConViT) · ★ 468 · Updated 3 years ago
- Neighborhood Attention Transformer, arxiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arxiv 2022 · ★ 1,145 · Updated last year
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) · ★ 1,351 · Updated last year
- This is an official implementation of CvT: Introducing Convolutions to Vision Transformers. · ★ 582 · Updated 2 years ago
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" · ★ 558 · Updated 3 years ago
- [ECCV 2022] Official repository for "MaxViT: Multi-Axis Vision Transformer". SOTA foundation models for classification, detection, segmen… · ★ 484 · Updated 2 years ago
- A PyTorch implementation of "CoAtNet: Marrying Convolution and Attention for All Data Sizes" · ★ 389 · Updated 3 years ago
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) · ★ 456 · Updated 3 years ago
- Implementation of various self-attention mechanisms focused on computer vision. Ongoing repository. · ★ 1,210 · Updated 4 years ago
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models · ★ 795 · Updated 3 months ago
- This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling". · ★ 993 · Updated 2 years ago
- This is an official implementation for "Self-Supervised Learning with Swin Transformers". · ★ 658 · Updated 4 years ago
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) · ★ 485 · Updated 4 years ago
- Official PyTorch implementation of Fully Attentional Networks · ★ 479 · Updated 2 years ago
- PyTorch implementation of MoCo v3 (https://arxiv.org/abs/2104.02057) · ★ 1,286 · Updated 3 years ago
- EsViT: Efficient self-supervised Vision Transformers · ★ 412 · Updated 2 years ago
- Learning Rate Warmup in PyTorch · ★ 411 · Updated 2 months ago
- ICCV 2021, Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet · ★ 1,190 · Updated last year
- MultiMAE: Multi-modal Multi-task Masked Autoencoders, ECCV 2022 · ★ 593 · Updated 2 years ago
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in Pytorc… · ★ 307 · Updated 3 years ago
- PyTorch implementation of SimSiam (https://arxiv.org/abs/2011.10566) · ★ 1,211 · Updated 2 years ago
- EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022] · ★ 1,072 · Updated 2 years ago
- [ICML 2023] Official PyTorch implementation of Global Context Vision Transformers · ★ 439 · Updated last year
- PyTorch implementation of Barlow Twins. · ★ 990 · Updated 3 years ago
- [NeurIPS 2022] Official code for "Focal Modulation Networks" · ★ 742 · Updated last year
- ★ 605 · Updated 3 weeks ago
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) · ★ 742 · Updated 3 years ago