Tony-Y / pytorch_warmup
Learning Rate Warmup in PyTorch
☆411, updated 2 months ago
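For context on what the library provides: learning rate warmup ramps the learning rate up from a small value over the first training steps before handing control to the main schedule, which helps stabilize adaptive optimizers early in training. The snippet below is a minimal, generic sketch of linear warmup followed by cosine decay, built with PyTorch's stock `LambdaLR`; it is not pytorch_warmup's own API, and `warmup_steps`, `total_steps`, and the tiny model are placeholder assumptions.

```python
import math
import torch
from torch import nn

# Placeholder model and optimizer; any nn.Module / optimizer works here.
model = nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

warmup_steps = 500    # assumed value: ramp the LR over the first 500 steps
total_steps = 10_000  # assumed total number of optimizer steps

def lr_lambda(step: int) -> float:
    # Linear warmup from ~0 to 1, then cosine decay back toward 0.
    if step < warmup_steps:
        return (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for step in range(total_steps):
    optimizer.zero_grad()
    loss = model(torch.randn(4, 10)).sum()  # dummy loss for illustration
    loss.backward()
    optimizer.step()
    scheduler.step()  # multiplier from lr_lambda is applied to the base LR each step
```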
Alternatives and similar repositories for pytorch_warmup
Users interested in pytorch_warmup are comparing it to the libraries listed below.
- Tiny PyTorch library for maintaining a moving average of a collection of parameters (☆435, updated 11 months ago); a short EMA sketch follows this list.
- ☆465, updated 2 years ago
- Gradually-Warmup Learning Rate Scheduler for PyTorch (☆992, updated 11 months ago)
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" (☆1,077, updated 2 years ago)
- A PyTorch implementation of the 1d and 2d Sinusoidal positional encoding/embedding. (☆257, updated 4 years ago)
- An All-MLP solution for Vision, from Google AI (☆1,038, updated 2 months ago)
- Compute CNN receptive field size in pytorch in one line (☆362, updated last year)
- An (unofficial) implementation of Focal Loss, as described in the RetinaNet paper, generalized to the multi-class case. (☆238, updated last year)
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in PyTorch (☆307, updated 3 years ago)
- 🛠 Toolbox to extend PyTorch functionalities (☆420, updated last year)
- Code for the Convolutional Vision Transformer (ConViT) (☆468, updated 3 years ago)
- Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!) (☆535, updated 10 months ago)
- Transformer based on a variant of attention with linear complexity with respect to sequence length (☆797, updated last year)
- NFNets and Adaptive Gradient Clipping for SGD implemented in PyTorch. Find explanation at tourdeml.github.io/blog/ (☆348, updated last year)
- A Pytorch-Lightning implementation of self-supervised algorithms (☆543, updated 3 years ago)
- An implementation of the efficient attention module. (☆320, updated 4 years ago)
- Implementation of Linformer for Pytorch (☆298, updated last year)
- This is an official implementation for "Self-Supervised Learning with Swin Transformers". (☆658, updated 4 years ago)
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" (☆821, updated 3 years ago)
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms (☆259, updated 4 years ago)
- Implementation of Axial attention - attending to multi-dimensional data efficiently (☆385, updated 4 years ago)
- Self-supervised vIsion Transformer (SiT) (☆337, updated 2 years ago)
- PyTorch Implementation of CvT: Introducing Convolutions to Vision Transformers (☆226, updated 4 years ago)
- Seamless analysis of your PyTorch models (RAM usage, FLOPs, MACs, receptive field, etc.) (☆221, updated 5 months ago)
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) (☆485, updated 4 years ago)
- A PyTorch implementation of "CoAtNet: Marrying Convolution and Attention for All Data Sizes" (☆389, updated 3 years ago)
- ☆261, updated 5 years ago
- Ranger deep learning optimizer rewrite to use newest components (☆334, updated last year)
- Standalone TFRecord reader/writer with PyTorch data loaders (☆891, updated 3 months ago)
- torchsummaryX: Improved visualization tool of torchsummary (☆303, updated 3 years ago)
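As referenced in the first entry of the list, the parameter-averaging library maintains an exponential moving average (EMA) of model weights. The sketch below is a minimal, self-contained EMA helper in plain PyTorch, shown only to illustrate the technique; the `EMA` class name, the `decay` default, and the toy model are assumptions for this example, not that library's API.

```python
import copy
import torch
from torch import nn

class EMA:
    """Keep an exponential moving average of a model's parameters (illustrative sketch)."""

    def __init__(self, model: nn.Module, decay: float = 0.999):  # decay value is an assumption
        self.decay = decay
        # A deep copy holds the averaged weights; it is never trained directly.
        self.shadow = copy.deepcopy(model).eval()
        for p in self.shadow.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model: nn.Module) -> None:
        # shadow = decay * shadow + (1 - decay) * current
        for s, p in zip(self.shadow.parameters(), model.parameters()):
            s.mul_(self.decay).add_(p, alpha=1.0 - self.decay)

# Usage: call ema.update(model) after each optimizer step,
# then evaluate with ema.shadow instead of model.
model = nn.Linear(10, 2)
ema = EMA(model, decay=0.999)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

loss = model(torch.randn(4, 10)).sum()  # dummy loss for illustration
loss.backward()
opt.step()
ema.update(model)
```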