JingzhaoZhang / why-clipping-accelerates
A PyTorch implementation of the LSTM experiments in the paper "Why Gradient Clipping Accelerates Training: A Theoretical Justification for Adaptivity"
☆46 · Updated 5 years ago
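The repository's topic, gradient clipping by global norm, rescales the gradient so its L2 norm never exceeds a threshold c: g ← g · min(1, c/‖g‖). A minimal self-contained sketch of that rule (a plain-Python illustration, not code from this repository; in practice PyTorch users would call `torch.nn.utils.clip_grad_norm_`):

```python
import math

def clip_by_global_norm(grads, max_norm):
    """Rescale a flat gradient vector so its L2 norm is at most max_norm.

    Implements g <- g * min(1, max_norm / ||g||), the clipping rule
    analyzed in the paper.
    """
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads

# A gradient with norm 5.0, clipped to norm 2.5:
print(clip_by_global_norm([3.0, 4.0], 2.5))  # → [1.5, 2.0]
```

Gradients already within the threshold pass through unchanged, which is what makes the update adaptive to the local gradient scale.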
Alternatives and similar repositories for why-clipping-accelerates
Users interested in why-clipping-accelerates are comparing it to the repositories listed below
- Reparameterize your PyTorch modules ☆71 · Updated 4 years ago
- Code base for SRSGD. ☆29 · Updated 5 years ago
- This repository is no longer maintained. Check ☆81 · Updated 5 years ago
- Implementation of Methods Proposed in Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks (NeurIPS 2019) ☆35 · Updated 5 years ago
- [ICLR 2020] FSPool: Learning Set Representations with Featurewise Sort Pooling ☆42 · Updated last year
- [NeurIPS'19] [PyTorch] Adaptive Regularization in NN ☆68 · Updated 5 years ago
- Code for "Supermasks in Superposition" ☆124 · Updated last year
- The original code for the paper "How to train your MAML" along with a replication of the original "Model Agnostic Meta Learning" (MAML) p… ☆41 · Updated 4 years ago
- Code for Self-Tuning Networks (ICLR 2019) https://arxiv.org/abs/1903.03088 ☆54 · Updated 6 years ago
- ☆45 · Updated 5 years ago
- ☆61 · Updated 2 years ago
- PyTorch Examples repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆62 · Updated last year
- PyTorch Implementations of Dropout Variants ☆87 · Updated 7 years ago
- Geometric Certifications of Neural Nets ☆42 · Updated 2 years ago
- [JMLR] TRADES + random smoothing for certifiable robustness ☆14 · Updated 5 years ago
- ☆41 · Updated 2 years ago
- Low-variance, efficient and unbiased gradient estimation for optimizing models with binary latent variables (ICLR 2019) ☆28 · Updated 6 years ago
- Implementation of Information Dropout ☆39 · Updated 8 years ago
- Evaluating AlexNet features at various depths ☆40 · Updated 4 years ago
- ☆47 · Updated 4 years ago
- Ἀνατομή is a PyTorch library to analyze representations of neural networks ☆65 · Updated 2 months ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆60 · Updated 4 years ago
- The Limited Multi-Label Projection Layer ☆59 · Updated last year
- MTAdam: Automatic Balancing of Multiple Training Loss Terms ☆36 · Updated 4 years ago
- CIFAR-5m dataset ☆39 · Updated 4 years ago
- Memory-efficient MAML using gradient checkpointing ☆86 · Updated 5 years ago
- Implementation of the Deep Frank-Wolfe Algorithm in PyTorch ☆62 · Updated 4 years ago
- Hybrid Discriminative-Generative Training via Contrastive Learning ☆75 · Updated 2 years ago
- ☆124 · Updated last year
- Official PyTorch code release for Implicit Gradient Transport, NeurIPS'19 ☆21 · Updated 6 years ago