fadel / pytorch_ema
Tiny PyTorch library for maintaining a moving average of a collection of parameters.
☆435 · Updated 10 months ago
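For context on what the library does: the exponential moving average (EMA) technique keeps a detached shadow copy of every parameter and blends the live values into it after each optimizer step. The sketch below illustrates that idea only; the `SimpleEMA` class and its methods are hypothetical and are not the torch_ema API.

```python
import torch

class SimpleEMA:
    """Minimal sketch of an exponential moving average over model parameters."""

    def __init__(self, parameters, decay=0.999):
        self.decay = decay
        # Detached shadow copy of every parameter.
        self.shadow = [p.detach().clone() for p in parameters]

    @torch.no_grad()
    def update(self, parameters):
        # shadow <- decay * shadow + (1 - decay) * current
        for s, p in zip(self.shadow, parameters):
            s.mul_(self.decay).add_(p.detach(), alpha=1.0 - self.decay)

    @torch.no_grad()
    def copy_to(self, parameters):
        # Overwrite the live parameters with the averaged ones (e.g. before eval).
        for s, p in zip(self.shadow, parameters):
            p.copy_(s)

# Hypothetical usage inside a training loop:
model = torch.nn.Linear(10, 1)
ema = SimpleEMA(model.parameters(), decay=0.995)
# ... after each optimizer.step():
ema.update(model.parameters())
```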
Alternatives and similar repositories for pytorch_ema
Users interested in pytorch_ema are comparing it to the libraries listed below.
- Learning Rate Warmup in PyTorch ☆411 · Updated 2 months ago
- ☆465 · Updated 2 years ago
- A simple way to keep track of an Exponential Moving Average (EMA) version of your Pytorch model ☆601 · Updated 8 months ago
- A LARS implementation in PyTorch ☆349 · Updated 5 years ago
- A PyTorch implementation of the 1d and 2d Sinusoidal positional encoding/embedding. ☆253 · Updated 4 years ago
- A Pytorch-Lightning implementation of self-supervised algorithms ☆543 · Updated 3 years ago
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" ☆819 · Updated 3 years ago
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,077 · Updated 2 years ago
- Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!) ☆536 · Updated 9 months ago
- NFNets and Adaptive Gradient Clipping for SGD implemented in PyTorch. Find the explanation at tourdeml.github.io/blog/ ☆347 · Updated last year
- Implementation of Axial attention - attending to multi-dimensional data efficiently ☆385 · Updated 3 years ago
- Gradually-Warmup Learning Rate Scheduler for PyTorch ☆991 · Updated 10 months ago
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch ☆1,165 · Updated 2 years ago
- An All-MLP solution for Vision, from Google AI ☆1,036 · Updated last month
- Transformer based on a variant of attention whose complexity is linear with respect to sequence length ☆794 · Updated last year
- An implementation of 1D, 2D, and 3D positional encoding in Pytorch and TensorFlow ☆603 · Updated 10 months ago
- Implementation of 1D, 2D, and 3D FFT convolutions in PyTorch. Much faster than direct convolutions for large kernel sizes. ☆501 · Updated last year
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in Pytorch ☆305 · Updated 3 years ago
- Collection of PyTorch Lightning implementations of Generative Adversarial Network varieties presented in research papers. ☆169 · Updated 5 months ago
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆486 · Updated 4 years ago
- ☆605 · Updated 2 months ago
- The correct way to resize images or tensors. For Numpy or Pytorch (differentiable). ☆560 · Updated 2 years ago
- PyTorch implementation of Barlow Twins. ☆989 · Updated 3 years ago
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models ☆797 · Updated 2 months ago
- EsViT: Efficient self-supervised Vision Transformers ☆413 · Updated last year
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) ☆458 · Updated 3 years ago
- This is a pytorch implementation of the k-means clustering algorithm ☆321 · Updated 5 months ago
- PyTorch implementation of SimCLR: supports multi-GPU training and closely reproduces results ☆208 · Updated last year
- Implementation of Linformer for Pytorch ☆295 · Updated last year
- Ranger deep learning optimizer rewrite to use newest components ☆333 · Updated last year