eBay / AutoOpt
Automatic and Simultaneous Adjustment of Learning Rate and Momentum for Stochastic Gradient Descent
☆45 · Updated 4 years ago
Alternatives and similar repositories for AutoOpt:
Users interested in AutoOpt are comparing it to the libraries listed below.
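For context, AutoOpt's title refers to the two hyperparameters of SGD with momentum. The sketch below is illustrative only (it is not AutoOpt's adaptation algorithm): a plain heavy-ball update on a 1-D quadratic, with `lr` and `momentum` as the fixed knobs that AutoOpt would instead adjust automatically during training.

```python
# Illustrative sketch (not AutoOpt's method): plain SGD with heavy-ball
# momentum, showing the two hyperparameters -- learning rate and momentum --
# that AutoOpt adjusts automatically and simultaneously.

def sgd_momentum(grad, x0, lr=0.1, momentum=0.9, steps=300):
    """Minimize a 1-D function via SGD with momentum; returns the final iterate."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(x)  # velocity accumulates past gradients
        x = x + v                        # parameter update
    return x

# Minimizing f(x) = x^2 (gradient 2x): iterates should approach 0.
x_final = sgd_momentum(lambda x: 2 * x, x0=5.0)
```

Badly chosen `lr`/`momentum` pairs make this same loop diverge or oscillate, which is the tuning burden that automatic adjustment aims to remove.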
- [NeurIPS'19] [PyTorch] Adaptive Regularization in NN ☆68 · Updated 5 years ago
- Repository with code for the paper "Inhibited Softmax for Uncertainty Estimation in Neural Networks" ☆25 · Updated 5 years ago
- ☆61 · Updated 2 years ago
- Code base for SRSGD. ☆28 · Updated 5 years ago
- Using / reproducing DAC from the paper "Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees" ☆27 · Updated 4 years ago
- ☆34 · Updated 6 years ago
- Pretrained TorchVision models on the CIFAR-10 dataset (with weights) ☆24 · Updated 4 years ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability". ☆30 · Updated 5 years ago
- Code to accompany the paper "Radial Bayesian Neural Networks: Beyond Discrete Support in Large-Scale Bayesian Deep Learning" ☆33 · Updated 4 years ago
- ☆11 · Updated 5 years ago
- Implementation of the Deep Frank-Wolfe Algorithm in PyTorch ☆62 · Updated 4 years ago
- Computing various norms/measures on over-parametrized neural networks ☆49 · Updated 6 years ago
- A PyTorch implementation of the LSTM experiments in the paper "Why Gradient Clipping Accelerates Training: A Theoretical Justification f…" ☆45 · Updated 5 years ago
- An implementation of Shampoo ☆74 · Updated 7 years ago
- Code for "Aggregated Momentum: Stability Through Passive Damping", Lucas et al. 2018 ☆34 · Updated 6 years ago
- ☆45 · Updated 5 years ago
- Code for the NeurIPS 2019 paper "Asymmetric Valleys: Beyond Sharp and Flat Local Minima". ☆14 · Updated 5 years ago
- Code release to reproduce the ASHA experiments from "Random Search and Reproducibility for NAS." ☆22 · Updated 5 years ago
- The Deep Weight Prior, ICLR 2019 ☆44 · Updated 3 years ago
- Code for Self-Tuning Networks (ICLR 2019) https://arxiv.org/abs/1903.03088 ☆53 · Updated 5 years ago
- Code for our paper "Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers". ☆21 · Updated 3 years ago
- Implementation of Information Dropout ☆39 · Updated 7 years ago
- Computing the eigenvalues of the Neural Tangent Kernel and the Conjugate Kernel (aka NNGP kernel) over the Boolean cube ☆47 · Updated 5 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning. ☆17 · Updated 4 years ago
- ☆24 · Updated 11 months ago
- This code reproduces the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information" ☆50 · Updated 3 years ago
- Parameter-Space Saliency Maps for Explainability ☆23 · Updated 2 years ago
- "Learning Rate Dropout" in PyTorch ☆34 · Updated 5 years ago
- Keras implementation of Padam from "Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks" ☆17 · Updated 6 years ago
- Mathematical consequences of orthogonal weight initialization and regularization in deep learning. Experiments with gain-adjusted orthog… ☆17 · Updated 5 years ago