loshchil / AdamW-and-SGDW
Decoupled Weight Decay Regularization (ICLR 2019)
☆270 · Updated 6 years ago
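The paper's core idea is to decouple weight decay from the gradient-based update: instead of adding `λ·w` to the gradient (L2 regularization), the decay is applied directly to the weights after the adaptive step. A minimal pure-Python sketch of one AdamW step on a scalar parameter (this is illustrative, not the authors' reference code; the function name and hyperparameter defaults are assumptions):

```python
import math

def adamw_step(w, g, m, v, t, lr=1e-3, betas=(0.9, 0.999),
               eps=1e-8, weight_decay=1e-2):
    """One AdamW update on scalar parameter w with gradient g.

    Weight decay is decoupled: it is subtracted from w directly,
    not folded into the gradient as in L2-regularized Adam.
    """
    b1, b2 = betas
    m = b1 * m + (1 - b1) * g          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g      # second-moment estimate
    m_hat = m / (1 - b1 ** t)          # bias correction (t starts at 1)
    v_hat = v / (1 - b2 ** t)
    # Adaptive step plus decoupled decay term, both scaled by lr:
    w = w - lr * (m_hat / (math.sqrt(v_hat) + eps) + weight_decay * w)
    return w, m, v
```

The decoupling is visible when the gradient is zero: the Adam term vanishes, but the weight still shrinks by `lr * weight_decay * w`, whereas with L2 regularization the decay would also pass through Adam's per-parameter rescaling.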
Alternatives and similar repositories for AdamW-and-SGDW:
Users interested in AdamW-and-SGDW are comparing it to the repositories listed below
- Implementations of ideas from recent papers ☆391 · Updated 4 years ago
- ☆251 · Updated 8 years ago
- Totally Versatile Miscellanea for Pytorch ☆470 · Updated 2 years ago
- Implementation for the Lookahead Optimizer. ☆240 · Updated 2 years ago
- Code to reproduce some of the figures in the paper "On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima" ☆138 · Updated 7 years ago
- Snapshot Ensembles in Torch (Snapshot Ensembles: Train 1, Get M for Free) ☆190 · Updated 7 years ago
- 2.86% and 15.85% test error on CIFAR-10 and CIFAR-100 ☆296 · Updated 6 years ago
- PyTorch implementation of the Accelerated SGD algorithm. ☆215 · Updated 6 years ago
- PyTorch implementation of the Lookahead Optimizer ☆189 · Updated 2 years ago
- A PyTorch implementation of "Learning to learn by gradient descent by gradient descent" ☆311 · Updated 6 years ago
- Experimental ground for optimizing memory of PyTorch models ☆364 · Updated 6 years ago
- Code for experiments on importance sampling for training neural networks ☆325 · Updated 3 years ago
- Stochastic Weight Averaging in PyTorch ☆968 · Updated 3 years ago
- Release of CIFAR-10.1, a new test set for CIFAR-10. ☆222 · Updated 4 years ago
- PyTorch and TensorFlow functional model definitions ☆586 · Updated 7 years ago
- Robust Bi-Tempered Logistic Loss Based on Bregman Divergences. https://arxiv.org/pdf/1906.03361.pdf ☆148 · Updated 3 years ago
- A plug-in replacement for DataLoader to load ImageNet disk-sequentially in PyTorch. ☆238 · Updated 3 years ago
- A drop-in replacement for CIFAR-10. ☆239 · Updated 3 years ago
- A re-implementation of Fixed-update Initialization ☆152 · Updated 5 years ago
- Experiments with Adam/AdamW/AMSGrad ☆196 · Updated 6 years ago
- Code for "The Reversible Residual Network: Backpropagation Without Storing Activations" ☆356 · Updated 6 years ago
- Lookahead optimizer ("Lookahead Optimizer: k steps forward, 1 step back") for PyTorch ☆334 · Updated 5 years ago
- Code for reproducing Manifold Mixup results (ICML 2019) ☆487 · Updated 10 months ago
- Implements the AdamW optimizer (https://arxiv.org/abs/1711.05101), a cosine learning rate scheduler, and "Cyclical Learning Rates for Training Neu…" ☆149 · Updated 5 years ago
- Code for the paper "Learning to Reweight Examples for Robust Deep Learning" ☆269 · Updated 5 years ago
- Apollo: An Adaptive Parameter-wise Diagonal Quasi-Newton Method for Nonconvex Stochastic Optimization ☆182 · Updated 3 years ago
- A New Optimization Technique for Deep Neural Networks ☆535 · Updated 3 years ago
- A smoother activation function (undergrad code) ☆108 · Updated 4 years ago
- ☆352 · Updated 5 years ago
- 2.56%, 15.20%, and 1.30% test error on CIFAR-10, CIFAR-100, and SVHN https://arxiv.org/abs/1708.04552 ☆549 · Updated 4 years ago