SHI-Labs / Compact-Transformers
Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!)
⭐536 · Updated 11 months ago
Alternatives and similar repositories for Compact-Transformers
Users interested in Compact-Transformers are comparing it with the repositories listed below.
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" · ⭐1,077 · Updated 2 years ago
- Code for the Convolutional Vision Transformer (ConViT) · ⭐470 · Updated 4 years ago
- This is an official implementation of CvT: Introducing Convolutions to Vision Transformers. · ⭐583 · Updated 2 years ago
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" · ⭐819 · Updated 3 years ago
- An All-MLP solution for Vision, from Google AI · ⭐1,050 · Updated 3 months ago
- EsViT: Efficient self-supervised Vision Transformers · ⭐412 · Updated 2 years ago
- A PyTorch implementation of "CoAtNet: Marrying Convolution and Attention for All Data Sizes" · ⭐392 · Updated 4 years ago
- ⭐605 · Updated 2 months ago
- Official PyTorch implementation of the "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021) paper · ⭐774 · Updated 2 years ago
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) · ⭐461 · Updated 3 years ago
- NFNets and Adaptive Gradient Clipping for SGD implemented in PyTorch. Find explanation at tourdeml.github.io/blog/ · ⭐349 · Updated last year
- Learning Rate Warmup in PyTorch · ⭐413 · Updated 4 months ago
- Self-supervised vIsion Transformer (SiT) · ⭐338 · Updated 2 years ago
- Implementation of Visual Transformer for Small-size Datasets · ⭐126 · Updated 3 years ago
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) · ⭐1,356 · Updated last year
- ⭐466 · Updated 2 years ago
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) · ⭐485 · Updated 4 years ago
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" · ⭐559 · Updated 3 years ago
- Official PyTorch implementation of Fully Attentional Networks · ⭐480 · Updated 2 years ago
- Tiny PyTorch library for maintaining a moving average of a collection of parameters. · ⭐438 · Updated last year
- [ECCV 2022] Official repository for "MaxViT: Multi-Axis Vision Transformer". SOTA foundation models for classification, detection, segmen… · ⭐487 · Updated 2 years ago
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in Pytorc… · ⭐309 · Updated 3 years ago
- LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference · ⭐619 · Updated 3 years ago
- ICCV 2021, Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet · ⭐1,189 · Updated 2 years ago
- Documentation for Ross Wightman's timm image model library · ⭐315 · Updated last year
- This is an official implementation for "Self-Supervised Learning with Swin Transformers". · ⭐663 · Updated 4 years ago
- A PyTorch Knowledge Distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quan… · ⭐646 · Updated 2 years ago
- Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision · ⭐218 · Updated 4 years ago
- Ranger deep learning optimizer rewrite to use newest components · ⭐338 · Updated last year
- Compare neural networks by their feature similarity · ⭐373 · Updated 2 years ago