lessw2020 / Ranger21
Ranger deep learning optimizer rewritten to use the newest components
☆331 · Updated last year
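For orientation, a minimal usage sketch. The constructor arguments below are assumptions based on the repository's README (Ranger21 builds its warmup and warmdown schedule internally, so it expects the planned training length up front); check the repo for the current signature:

```python
import torch
from ranger21 import Ranger21  # assumed import path; see the repo's README

model = torch.nn.Linear(10, 2)
optimizer = Ranger21(
    model.parameters(),
    lr=1e-3,
    num_epochs=10,               # assumed: used to build the internal LR schedule
    num_batches_per_epoch=100,   # assumed: likewise, per the README
)

# Standard PyTorch training step
loss = model(torch.randn(4, 10)).sum()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```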
Alternatives and similar repositories for Ranger21
Users interested in Ranger21 are comparing it to the libraries listed below.
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,074 · Updated 2 years ago
- Collection of the latest, greatest deep learning optimizers for PyTorch, suitable for CNN and NLP workloads ☆215 · Updated 4 years ago
- NFNets and Adaptive Gradient Clipping for SGD implemented in PyTorch; explanation at tourdeml.github.io/blog/ (a minimal AGC sketch follows this list) ☆348 · Updated last year
- Implementation of the Adan (ADAptive Nesterov momentum algorithm) optimizer in PyTorch ☆252 · Updated 2 years ago
- Over9000 optimizer ☆426 · Updated 2 years ago
- Ranger - a synergistic optimizer combining RAdam (Rectified Adam), Gradient Centralization, and Lookahead in one codebase ☆1,202 · Updated last year
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆486 · Updated 4 years ago
- Implementation of a U-net complete with efficient attention as well as the latest research findings ☆282 · Updated last year
- Optimizer, LR scheduler, and loss function collections in PyTorch ☆304 · Updated 2 weeks ago
- Learning rate warmup in PyTorch ☆410 · Updated last week
- Tiny PyTorch library for maintaining a moving average of a collection of parameters ☆430 · Updated 8 months ago
- ☆462 · Updated 2 years ago
- Pre-trained NFNets reaching 99% of the accuracy of the official paper "High-Performance Large-Scale Image Recognition Without Normalization" ☆159 · Updated 4 years ago
- A library to inspect and extract intermediate layers of PyTorch models ☆473 · Updated 3 years ago
- Seamless analysis of your PyTorch models (RAM usage, FLOPs, MACs, receptive field, etc.) ☆218 · Updated 3 months ago
- Helps you write algorithms in PyTorch that adapt to the available (CUDA) memory ☆438 · Updated 9 months ago
- Repository for the NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients" ☆1,063 · Updated 10 months ago
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆379 · Updated last year
- Implementation of 1D, 2D, and 3D FFT convolutions in PyTorch; much faster than direct convolutions for large kernel sizes ☆499 · Updated last year
- Collection of PyTorch Lightning implementations of Generative Adversarial Network varieties presented in research papers ☆169 · Updated 3 months ago
- Fast, differentiable sorting and ranking in PyTorch ☆815 · Updated 2 weeks ago
- A PyTorch implementation of Sharpness-Aware Minimization for Efficiently Improving Generalization ☆135 · Updated 4 years ago
- Implementation of Nyström self-attention, from the paper "Nyströmformer" ☆135 · Updated 3 months ago
- A simple way to keep track of an exponential moving average (EMA) version of your PyTorch model (a bare-bones EMA sketch also follows this list) ☆592 · Updated 6 months ago
- The correct way to resize images or tensors, for NumPy or PyTorch (differentiable) ☆559 · Updated last year
- Official PyTorch implementation of Long-Short Transformer (NeurIPS 2021) ☆225 · Updated 3 years ago
- ☆376 · Updated last year
- Useful PyTorch functions and modules that are not implemented in PyTorch by default ☆188 · Updated last year
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" ☆817 · Updated 2 years ago
- Code repository of the paper "Modelling Long Range Dependencies in ND: From Task-Specific to a General Purpose CNN" (https://arxiv.org/abs…) ☆184 · Updated last month
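The NFNets entry above highlights Adaptive Gradient Clipping (AGC), which clips each gradient relative to the norm of the parameter it updates rather than against a fixed global threshold. A minimal sketch of the idea, using tensor-wise norms for brevity (the paper and the repositories above clip unit-wise, e.g. per output row):

```python
import torch

def adaptive_grad_clip_(parameters, clip_factor=0.01, eps=1e-3):
    """Simplified AGC: rescale p.grad in place so its norm stays below
    clip_factor * max(||p||, eps), computed per parameter tensor."""
    for p in parameters:
        if p.grad is None:
            continue
        w_norm = p.detach().norm()
        g_norm = p.grad.detach().norm()
        max_norm = clip_factor * torch.clamp(w_norm, min=eps)
        if g_norm > max_norm:
            p.grad.mul_(max_norm / (g_norm + 1e-6))
```

Called between `loss.backward()` and `optimizer.step()`, this keeps each update proportional to the scale of the parameter it touches, which is what lets NFNets train stably without batch normalization.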
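Two entries above (the tiny moving-average library and the simple EMA tracker) address the same pattern: maintaining an exponential moving average of a model's parameters and evaluating with the averaged weights. A self-contained sketch of the core update, assuming nothing beyond plain PyTorch (buffers such as BatchNorm statistics are ignored here; the libraries above handle them):

```python
import copy
import torch

class EMA:
    """Track an exponential moving average of a model's parameters."""
    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.shadow = copy.deepcopy(model).eval()  # averaged copy used for eval
        for p in self.shadow.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        # shadow <- decay * shadow + (1 - decay) * current weights
        for ema_p, p in zip(self.shadow.parameters(), model.parameters()):
            ema_p.mul_(self.decay).add_(p, alpha=1.0 - self.decay)
```

Typical use: call `ema.update(model)` after each `optimizer.step()` and run validation on `ema.shadow`.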