JingzhaoZhang / why-clipping-accelerates
A PyTorch implementation of the LSTM experiments from the paper "Why Gradient Clipping Accelerates Training: A Theoretical Justification for Adaptivity".
☆45 · Updated 5 years ago
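The technique the paper studies — clipping the gradient to a maximum norm before each descent step — can be sketched in a few lines of plain Python. This is an illustrative stand-in, not the repository's LSTM code: on `f(x) = x**4`, whose gradient `4*x**3` grows faster than any fixed step size can tolerate, plain gradient descent diverges from `x = 3.0` while the clipped update converges.

```python
import math

def clip_by_norm(grads, max_norm):
    """Scale the gradient list so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(g * g for g in grads))
    if norm > max_norm:
        grads = [g * max_norm / norm for g in grads]
    return grads

def clipped_gd(x0, lr=0.1, max_norm=1.0, steps=200):
    """Gradient descent on f(x) = x**4 with norm-clipped gradients."""
    x = x0
    for _ in range(steps):
        (g,) = clip_by_norm([4 * x ** 3], max_norm)  # gradient of x**4
        x -= lr * g  # step size is bounded by lr * max_norm
    return x

x = clipped_gd(3.0)  # converges toward the minimum at 0
# Unclipped GD from the same start overshoots immediately:
# 3.0 -> -7.8 -> ~182.0 -> ... (diverges)
```

The clipped iterate never moves more than `lr * max_norm` per step while the gradient is large, which is exactly the adaptivity the paper argues accelerates training on functions with rapidly growing gradients.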
Alternatives and similar repositories for why-clipping-accelerates:
Users interested in why-clipping-accelerates are comparing it to the repositories listed below.
- This repository is no longer maintained. Check ☆81 · Updated 4 years ago
- Code base for SRSGD. ☆28 · Updated 5 years ago
- Computing the eigenvalues of the Neural Tangent Kernel and Conjugate Kernel (aka NNGP kernel) over the boolean cube ☆47 · Updated 5 years ago
- An adaptive training algorithm for residual networks ☆15 · Updated 4 years ago
- Code for Self-Tuning Networks (ICLR 2019) https://arxiv.org/abs/1903.03088 ☆53 · Updated 5 years ago
- Low-variance, efficient, and unbiased gradient estimation for optimizing models with binary latent variables (ICLR 2019) ☆28 · Updated 6 years ago
- Geometric Certifications of Neural Nets ☆41 · Updated 2 years ago
- Code for "Supermasks in Superposition" ☆121 · Updated last year
- [JMLR] TRADES + random smoothing for certifiable robustness ☆14 · Updated 4 years ago
- [ICLR 2020] FSPool: Learning Set Representations with Featurewise Sort Pooling ☆42 · Updated last year
- SGD and Ordered SGD code for deep learning, SVM, and logistic regression ☆35 · Updated 4 years ago
- An implementation of Shampoo ☆74 · Updated 7 years ago
- Computing various norms/measures on over-parametrized neural networks ☆49 · Updated 6 years ago
- Limitations of the Empirical Fisher Approximation ☆47 · Updated 3 weeks ago
- Net2Net implementation in PyTorch for any possible vision layers ☆38 · Updated 7 years ago
- Monotone operator equilibrium networks ☆51 · Updated 4 years ago
- ☆32 · Updated 5 years ago
- Gradients as Features for Deep Representation Learning ☆43 · Updated 5 years ago
- Recurrent Back Propagation, Back Propagation Through Optimization, ICML 2018 ☆41 · Updated 6 years ago
- ☆45 · Updated 5 years ago
- PyTorch implementations of dropout variants ☆87 · Updated 7 years ago
- Implementation of the Deep Frank-Wolfe Algorithm in PyTorch ☆62 · Updated 4 years ago
- Implementation of methods proposed in "Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks" (NeurIPS 2019) ☆34 · Updated 4 years ago
- Reparameterize your PyTorch modules ☆70 · Updated 4 years ago
- ☆31 · Updated 4 years ago
- Implementation of Information Dropout ☆39 · Updated 7 years ago
- Official PyTorch code release for Implicit Gradient Transport, NeurIPS'19 ☆21 · Updated 5 years ago
- Code for the NeurIPS 2019 paper "Asymmetric Valleys: Beyond Sharp and Flat Local Minima" ☆14 · Updated 5 years ago
- The original code for the paper "How to train your MAML", along with a replication of the original "Model Agnostic Meta Learning" (MAML) p… ☆40 · Updated 4 years ago
- ☆35 · Updated last year