loshchil / AdamW-and-SGDW
Decoupled Weight Decay Regularization (ICLR 2019)
☆282 · Updated 6 years ago
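The repository accompanies the AdamW/SGDW paper, whose central change is decoupling weight decay from the gradient-based update. A minimal sketch of the idea, assuming PyTorch (whose built-in `torch.optim.AdamW` implements the decoupled rule; this is not the authors' original code, and the toy model and data below are made up for illustration):

```python
import torch

# Hypothetical toy model and data, just to exercise the optimizers.
model = torch.nn.Linear(10, 1)

# Coupled L2 (classic Adam): the decay term is added to the gradient,
# so it gets rescaled by Adam's per-parameter adaptive step sizes.
adam_l2 = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-2)

# Decoupled decay (AdamW): weights are shrunk by lr * weight_decay
# directly, independent of the gradient statistics.
adamw = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
adamw.step()       # Adam step plus the separate shrinkage p -= lr * wd * p
adamw.zero_grad()
```

The decoupling matters because with coupled L2 the decay gradient is divided by Adam's second-moment estimate, so frequently updated weights are effectively decayed less; AdamW applies the same decay rate to every weight.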
Alternatives and similar repositories for AdamW-and-SGDW
Users interested in AdamW-and-SGDW often compare it to the repositories listed below.
- ☆252 · Updated 8 years ago
- Implements PyTorch code for the Accelerated SGD algorithm. ☆215 · Updated 7 years ago
- Implementation for the Lookahead Optimizer. ☆243 · Updated 3 years ago
- Implementations of ideas from recent papers ☆392 · Updated 4 years ago
- Code for experiments regarding importance sampling for training neural networks ☆329 · Updated 3 years ago
- Code for "The Reversible Residual Network: Backpropagation Without Storing Activations" ☆362 · Updated 7 years ago
- Experimental ground for optimizing memory of PyTorch models ☆367 · Updated 7 years ago
- Totally Versatile Miscellanea for PyTorch ☆475 · Updated 3 years ago
- Code to reproduce some of the figures in the paper "On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima" ☆145 · Updated 8 years ago
- A Re-implementation of Fixed-update Initialization ☆155 · Updated 6 years ago
- Robust Bi-Tempered Logistic Loss Based on Bregman Divergences. https://arxiv.org/pdf/1906.03361.pdf ☆147 · Updated 3 years ago
- PyTorch implementation of the Lookahead Optimizer ☆195 · Updated 3 years ago
- Complementary code for the Targeted Dropout paper ☆254 · Updated 6 years ago
- A machine learning library for PyTorch ☆94 · Updated 3 years ago
- ☆165 · Updated 6 years ago
- Experiments with Adam/AdamW/amsgrad ☆201 · Updated 7 years ago
- A plug-in replacement for DataLoader to load ImageNet disk-sequentially in PyTorch. ☆239 · Updated 4 years ago
- Snapshot Ensembles in Torch (Snapshot Ensembles: Train 1, Get M for Free) ☆188 · Updated 8 years ago
- ☆219 · Updated 7 years ago
- Hypergradient descent ☆148 · Updated last year
- Implementation and experiments for AdamW in PyTorch ☆94 · Updated 5 years ago
- Code used to generate the results appearing in "Train longer, generalize better: closing the generalization gap in large batch training o…" ☆149 · Updated 8 years ago
- PyTorch implementations of LSTM Variants (Dropout + Layer Norm) ☆137 · Updated 4 years ago
- ☆133 · Updated 8 years ago
- Sparse Variational Dropout, ICML 2017 ☆313 · Updated 5 years ago
- Utilities for PyTorch ☆88 · Updated 3 years ago
- 2.86% and 15.85% error on CIFAR-10 and CIFAR-100 ☆297 · Updated 7 years ago
- Example code for the paper "Understanding deep learning requires rethinking generalization" ☆178 · Updated 5 years ago
- Hessian computation in PyTorch ☆187 · Updated 5 years ago
- Release of CIFAR-10.1, a new test set for CIFAR-10. ☆224 · Updated 5 years ago