JingzhaoZhang / why-clipping-accelerates
A PyTorch implementation of the LSTM experiments in the paper "Why Gradient Clipping Accelerates Training: A Theoretical Justification for Adaptivity"
☆46 · Updated 5 years ago
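For reference, the clipped-gradient update the paper studies looks roughly like this in PyTorch. This is a minimal sketch built around `torch.nn.utils.clip_grad_norm_`, not the repository's actual training loop; the toy LSTM classifier, data shapes, learning rate, and `max_norm=0.25` threshold are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of one clipped-SGD step on a toy LSTM classifier.
# Model size, data shapes, lr, and max_norm are illustrative assumptions,
# not the repository's actual configuration.
torch.manual_seed(0)
lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
head = nn.Linear(64, 10)
params = list(lstm.parameters()) + list(head.parameters())
optimizer = torch.optim.SGD(params, lr=1.0)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 16, 32)             # (batch, seq_len, features)
y = torch.randint(0, 10, (8,))         # class labels

optimizer.zero_grad()
out, _ = lstm(x)                       # out: (batch, seq_len, hidden)
loss = criterion(head(out[:, -1]), y)  # predict from the last time step
loss.backward()

# Rescale gradients so their global L2 norm is at most max_norm, then
# take the SGD step. When the gradient is large, this shrinks the
# effective step size -- the adaptivity the paper's analysis credits
# for the speedup over fixed-step gradient descent.
torch.nn.utils.clip_grad_norm_(params, max_norm=0.25)
optimizer.step()
```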
Alternatives and similar repositories for why-clipping-accelerates
Users who are interested in why-clipping-accelerates are comparing it to the repositories listed below
- Reparameterize your PyTorch modules ☆71 · Updated 4 years ago
- Code base for SRSGD ☆29 · Updated 5 years ago
- ☆45 · Updated 5 years ago
- This repository is no longer maintained. Check… ☆81 · Updated 5 years ago
- [NeurIPS'19] [PyTorch] Adaptive Regularization in NN ☆68 · Updated 6 years ago
- Implementation of methods proposed in Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks (NeurIPS 2019) ☆35 · Updated 5 years ago
- Code for Self-Tuning Networks (ICLR 2019) https://arxiv.org/abs/1903.03088 ☆54 · Updated 6 years ago
- Code for "Supermasks in Superposition" ☆124 · Updated 2 years ago
- ☆41 · Updated 2 years ago
- ☆47 · Updated 4 years ago
- [JMLR] TRADES + random smoothing for certifiable robustness ☆14 · Updated 5 years ago
- SGD and Ordered SGD code for deep learning, SVM, and logistic regression ☆36 · Updated 5 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- ☆61 · Updated 2 years ago
- Evaluating AlexNet features at various depths ☆40 · Updated 5 years ago
- An adaptive training algorithm for residual networks ☆17 · Updated 5 years ago
- PyTorch implementations of dropout variants ☆87 · Updated 7 years ago
- Geometric Certifications of Neural Nets ☆42 · Updated 2 years ago
- Implementation of Information Dropout ☆39 · Updated 8 years ago
- Computing the eigenvalues of the Neural Tangent Kernel and the Conjugate Kernel (aka NNGP kernel) over the boolean cube ☆47 · Updated 6 years ago
- PyTorch examples repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆62 · Updated last year
- Ἀνατομή is a PyTorch library to analyze representations of neural networks ☆65 · Updated 4 months ago
- MTAdam: Automatic Balancing of Multiple Training Loss Terms ☆36 · Updated 4 years ago
- The original code for the paper "How to train your MAML" along with a replication of the original "Model Agnostic Meta Learning" (MAML) p… ☆41 · Updated 4 years ago
- Code release to reproduce ASHA experiments from "Random Search and Reproducibility for NAS" ☆22 · Updated 5 years ago
- [ICLR 2020] FSPool: Learning Set Representations with Featurewise Sort Pooling ☆41 · Updated 2 years ago
- ☆124 · Updated last year
- Implementation of the Deep Frank-Wolfe Algorithm -- PyTorch ☆62 · Updated 4 years ago
- Explores the ideas presented in Deep Ensembles: A Loss Landscape Perspective (https://arxiv.org/abs/1912.02757) by Stanislav Fort, Huiyi … ☆66 · Updated 5 years ago
- ☆45 · Updated 4 years ago