alphadl / lookahead.pytorch
Lookahead optimizer ("Lookahead Optimizer: k steps forward, 1 step back") for PyTorch
☆337 · Updated 6 years ago
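The "k steps forward, 1 step back" idea can be sketched as a wrapper around any inner optimizer: run the fast optimizer for k steps, then interpolate a set of slow weights toward the fast weights by a factor alpha and reset the fast weights to them. The class below is an illustrative sketch of that scheme, not the repo's actual API; the names `Lookahead`, `k`, and `alpha` follow the paper's terminology.

```python
import torch

class Lookahead:
    """Minimal sketch of the Lookahead scheme: k fast steps, then one
    slow interpolation step (not the repo's actual implementation)."""

    def __init__(self, base_optimizer, k=5, alpha=0.5):
        self.base = base_optimizer   # any torch.optim optimizer ("fast" weights)
        self.k = k                   # number of fast steps per slow update
        self.alpha = alpha           # slow-weight interpolation factor
        self.step_count = 0
        # Snapshot the "slow" weights at initialization.
        self.slow_weights = [
            [p.clone().detach() for p in group["params"]]
            for group in base_optimizer.param_groups
        ]

    def zero_grad(self):
        self.base.zero_grad()

    def step(self):
        self.base.step()             # one fast step
        self.step_count += 1
        if self.step_count % self.k == 0:
            # "1 step back": slow += alpha * (fast - slow), then fast = slow.
            for group, slow in zip(self.base.param_groups, self.slow_weights):
                for p, q in zip(group["params"], slow):
                    q.add_(p.data - q, alpha=self.alpha)
                    p.data.copy_(q)

# Usage sketch on a toy regression problem.
torch.manual_seed(0)
model = torch.nn.Linear(10, 1)
opt = Lookahead(torch.optim.SGD(model.parameters(), lr=0.02), k=5, alpha=0.5)
x, y = torch.randn(4, 10), torch.randn(4, 1)
for _ in range(20):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```

Because the wrapper only calls `zero_grad` and `step` on the inner optimizer, it composes with SGD, Adam, or any other `torch.optim` optimizer.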
Alternatives and similar repositories for lookahead.pytorch
Users interested in lookahead.pytorch are comparing it to the libraries listed below.
- PyTorch implementation of Lookahead Optimizer ☆195 · Updated 3 years ago
- Official PyTorch Repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆415 · Updated last year
- A New Optimization Technique for Deep Neural Networks ☆540 · Updated 3 years ago
- Stochastic Weight Averaging in PyTorch ☆977 · Updated 4 years ago
- Implementation and experiments for AdamW on PyTorch ☆94 · Updated 5 years ago
- A PyTorch implementation of "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks" ☆313 · Updated 5 years ago
- 🛠 Toolbox to extend PyTorch functionalities ☆419 · Updated last year
- Implements the AdamW optimizer (https://arxiv.org/abs/1711.05101), a cosine learning rate scheduler, and "Cyclical Learning Rates for Training Neu…" ☆152 · Updated 6 years ago
- Implementation of the mixup training method ☆467 · Updated 7 years ago
- PyTorch implementation of AutoAugment ☆159 · Updated 5 years ago
- torchsummaryX: improved visualization tool of torchsummary ☆303 · Updated 3 years ago
- Over9000 optimizer ☆424 · Updated 2 years ago
- A large-scale study of Knowledge Distillation ☆220 · Updated 5 years ago
- Implementations of ideas from recent papers ☆392 · Updated 4 years ago
- Deep Learning Experiment Management ☆641 · Updated 2 years ago
- Ranger: a synergistic optimizer combining RAdam (Rectified Adam), Gradient Centralization, and LookAhead in one codebase ☆1,207 · Updated last year
- Knowledge distillation methods implemented with TensorFlow (currently 11 (+1) methods, with more to be added) ☆265 · Updated 5 years ago
- High-level, batteries-included neural network training library for PyTorch ☆403 · Updated 3 years ago
- Implementation of "DropBlock: A regularization method for convolutional networks" in PyTorch ☆596 · Updated 5 years ago
- Accelerate training by storing parameters in one contiguous chunk of memory ☆293 · Updated 4 years ago
- PyTorch implementation of the paper "Class-Balanced Loss Based on Effective Number of Samples" ☆800 · Updated last year
- Standardizing weights to accelerate micro-batch training ☆551 · Updated 3 years ago
- Simple package that makes your generator work in a background thread ☆282 · Updated 3 years ago
- Mish deep learning activation function for PyTorch / FastAI ☆161 · Updated 5 years ago
- Official implementation of "FMix: Enhancing Mixed Sample Data Augmentation" ☆338 · Updated 4 years ago
- Debug PyTorch code using PySnooper ☆801 · Updated 4 years ago
- Implementation of https://arxiv.org/abs/1904.00962 ☆377 · Updated 4 years ago
- Gradually-Warmup Learning Rate Scheduler for PyTorch ☆991 · Updated last year
- PyTorch implementation of Focal Loss and Lovasz-Softmax Loss ☆337 · Updated 3 years ago
- Sublinear memory optimization for deep learning (https://arxiv.org/abs/1604.06174) ☆604 · Updated 5 years ago