katsura-jp / pytorch-cosine-annealing-with-warmup
☆458 · Updated 2 years ago
Alternatives and similar repositories for pytorch-cosine-annealing-with-warmup:
Users interested in pytorch-cosine-annealing-with-warmup are comparing it to the libraries listed below.
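The repository this page centers on implements a learning-rate schedule that warms up linearly and then follows cosine annealing. As a rough sketch only (not the library's actual API; `warmup_steps`, `total_steps`, and `min_lr_ratio` are hypothetical names), the same schedule shape can be reproduced with the stock `torch.optim.lr_scheduler.LambdaLR`:

```python
import math
import torch

# Illustrative sketch only -- not the API of pytorch-cosine-annealing-with-warmup.
# warmup_steps, total_steps and min_lr_ratio are placeholder hyperparameters.
def warmup_cosine_lambda(warmup_steps: int, total_steps: int, min_lr_ratio: float = 0.0):
    def lr_lambda(step: int) -> float:
        if step < warmup_steps:
            # Linear warmup from 0 up to the base learning rate.
            return step / max(1, warmup_steps)
        # Cosine annealing from the base learning rate down to min_lr_ratio * base_lr.
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return min_lr_ratio + (1 - min_lr_ratio) * 0.5 * (1 + math.cos(math.pi * progress))
    return lr_lambda

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=warmup_cosine_lambda(warmup_steps=500, total_steps=10_000)
)

for step in range(10_000):
    # ... forward / backward pass omitted ...
    optimizer.step()
    scheduler.step()
```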
- Tiny PyTorch library for maintaining a moving average of a collection of parameters. ☆428 · Updated 7 months ago
- Learning Rate Warmup in PyTorch ☆410 · Updated last month
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" ☆815 · Updated 2 years ago
- Gradually-Warmup Learning Rate Scheduler for PyTorch ☆989 · Updated 6 months ago
- SAM: Sharpness-Aware Minimization (PyTorch) ☆1,867 · Updated last year
- A simple way to keep track of an Exponential Moving Average (EMA) version of your Pytorch model (a minimal EMA sketch follows this list) ☆577 · Updated 5 months ago
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,071 · Updated 2 years ago
- An All-MLP solution for Vision, from Google AI ☆1,020 · Updated 7 months ago
- Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!) ☆526 · Updated 6 months ago
- A LARS implementation in PyTorch ☆345 · Updated 5 years ago
- Unofficial PyTorch implementation of "Meta Pseudo Labels" ☆387 · Updated last year
- NFNets and Adaptive Gradient Clipping for SGD implemented in PyTorch. Find explanation at tourdeml.github.io/blog/ ☆345 · Updated last year
- AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights (ICLR 2021) ☆414 · Updated 4 years ago
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆721 · Updated 3 years ago
- PyTorch implementation of MoCo v3 https://arxiv.org/abs/2104.02057 ☆1,264 · Updated 3 years ago
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆763 · Updated last year
- EsViT: Efficient self-supervised Vision Transformers ☆410 · Updated last year
- PyTorch implementation of SimSiam https://arxiv.org/abs/2011.10566 ☆1,192 · Updated 2 years ago
- Usable Implementation of "Bootstrap Your Own Latent" self-supervised learning, from Deepmind, in Pytorch ☆1,815 · Updated 9 months ago
- Explainability for Vision Transformers ☆949 · Updated 3 years ago
- Self-supervised vIsion Transformer (SiT) ☆328 · Updated 2 years ago
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) ☆456 · Updated 2 years ago
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch ☆1,138 · Updated last year
- Code for the Convolutional Vision Transformer (ConViT) ☆466 · Updated 3 years ago
- Ranger deep learning optimizer rewrite to use newest components ☆329 · Updated last year
- PyTorch implementation of Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning ☆491 · Updated 2 years ago
- Unofficial PyTorch Reimplementation of RandAugment. ☆637 · Updated 2 years ago
- Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision ☆217 · Updated 4 years ago
- A PyTorch implementation of "CoAtNet: Marrying Convolution and Attention for All Data Sizes" ☆381 · Updated 3 years ago
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in Pytorch ☆304 · Updated 3 years ago
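Two of the entries above maintain an exponential moving average (EMA) of model weights. The sketch below shows the underlying update rule in plain PyTorch; the `EMA` class name and the `decay` value are illustrative assumptions and are not taken from either library.

```python
import copy
import torch

class EMA:
    """Minimal exponential moving average of a model's parameters.
    Illustrative only; not the API of the EMA libraries listed above."""

    def __init__(self, model: torch.nn.Module, decay: float = 0.999):
        self.decay = decay
        # Keep a detached shadow copy of the weights.
        self.shadow = copy.deepcopy(model).eval()
        for p in self.shadow.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model: torch.nn.Module):
        # shadow = decay * shadow + (1 - decay) * current
        for ema_p, p in zip(self.shadow.parameters(), model.parameters()):
            ema_p.mul_(self.decay).add_(p, alpha=1 - self.decay)

model = torch.nn.Linear(10, 2)
ema = EMA(model, decay=0.999)
# Call after each optimizer step; evaluate with ema.shadow.
ema.update(model)
```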