Tony-Y / pytorch_warmup
Learning Rate Warmup in PyTorch
☆392 · Updated this week
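To make the idea behind this repo concrete, here is a minimal, framework-free sketch of linear learning-rate warmup: the LR multiplier ramps from near zero up to 1 over a fixed number of steps. This is only an illustration of the general technique, not pytorch_warmup's actual API (the library wraps a PyTorch optimizer and scheduler instead); the function and parameter names below are hypothetical.

```python
def warmup_factor(step: int, warmup_steps: int) -> float:
    """Linearly ramp the LR multiplier from ~0 to 1 over `warmup_steps`,
    then hold it at 1. `step` is 0-indexed."""
    if warmup_steps <= 0:
        return 1.0
    return min(1.0, (step + 1) / warmup_steps)

base_lr = 0.1
# LR at each of the first 8 steps with a 5-step warmup:
schedule = [base_lr * warmup_factor(s, warmup_steps=5) for s in range(8)]
# ramps roughly 0.02, 0.04, 0.06, 0.08, then holds at 0.1
# (up to floating-point rounding)
```

In a real PyTorch training loop, the same multiplier is typically applied on top of a base schedule (e.g. via `torch.optim.lr_scheduler.LambdaLR`), which is what warmup libraries like the ones listed below automate.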
Related projects
Alternatives and complementary repositories for pytorch_warmup
- Tiny PyTorch library for maintaining a moving average of a collection of parameters. ☆406 · Updated last month
- Gradually-Warmup Learning Rate Scheduler for PyTorch. ☆977 · Updated last month
- NFNets and Adaptive Gradient Clipping for SGD, implemented in PyTorch. Explanation at tourdeml.github.io/blog/. ☆345 · Updated 10 months ago
- 🛠 Toolbox to extend PyTorch functionalities. ☆417 · Updated 6 months ago
- Official implementation of "Self-Supervised Learning with Swin Transformers". ☆627 · Updated 3 years ago
- Implementation of ConvMixer for "Patches Are All You Need? 🤷". ☆1,062 · Updated 2 years ago
- Compute CNN receptive field size in PyTorch in one line. ☆349 · Updated 6 months ago
- Implementation of Linformer for PyTorch. ☆257 · Updated 10 months ago
- An all-MLP solution for vision, from Google AI. ☆1,003 · Updated 2 months ago
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?". ☆806 · Updated 2 years ago
- An (unofficial) implementation of Focal Loss, as described in the RetinaNet paper, generalized to the multi-class case. ☆225 · Updated 9 months ago
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms. ☆251 · Updated 3 years ago
- EsViT: Efficient self-supervised Vision Transformers. ☆408 · Updated last year
- A PyTorch implementation of Focal Loss. ☆962 · Updated 5 years ago
- PyTorch implementation of CvT: Introducing Convolutions to Vision Transformers. ☆224 · Updated 3 years ago
- Implementation of Transformer in Transformer: pixel-level attention paired with patch-level attention for image classification, in PyTorch. ☆300 · Updated 2 years ago
- Escaping the Big Data Paradigm with Compact Transformers (2021): train your Vision Transformers in 30 minutes on CIFAR-10 with a single GPU! ☆500 · Updated 2 weeks ago
- Implementing Stand-Alone Self-Attention in Vision Models using PyTorch. ☆454 · Updated 4 years ago
- Code for the Convolutional Vision Transformer (ConViT). ☆462 · Updated 3 years ago
- A general and accurate MACs / FLOPs profiler for PyTorch models. ☆571 · Updated 6 months ago
- torchsummaryX: improved visualization tool for torchsummary. ☆301 · Updated 2 years ago
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆480 · Updated 3 years ago
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers"