fKunstner / noise-sgd-adam-sign
☆17 · Updated 2 years ago
Alternatives and similar repositories for noise-sgd-adam-sign
Users interested in noise-sgd-adam-sign are comparing it to the repositories listed below.
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 3 years ago
- ☆28 · Updated 2 years ago
- Source code of "What can linearized neural networks actually say about generalization?" ☆20 · Updated 3 years ago
- Training vision models with full-batch gradient descent and regularization ☆38 · Updated 2 years ago
- Simple CIFAR10 ResNet example with JAX ☆23 · Updated 4 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated 2 years ago
- Measurements of Three-Level Hierarchical Structure in the Outliers in the Spectrum of Deepnet Hessians (ICML 2019) ☆16 · Updated 6 years ago
- ☆17 · Updated last year
- Computing various measures and generalization bounds on convolutional and fully connected networks ☆35 · Updated 6 years ago
- ☆34 · Updated last year
- ☆55 · Updated 5 years ago
- CIFAR-5m dataset ☆39 · Updated 4 years ago
- Code to implement the AND-mask and geometric mean for gradient-based optimization, from the paper "Learning explanations that are hard … ☆41 · Updated 4 years ago
- PyTorch code for "Improving Self-Supervised Learning by Characterizing Idealized Representations" ☆41 · Updated 2 years ago
- Implementations of orthogonal and semi-orthogonal convolutions in the Fourier domain with applications to adversarial robustness ☆47 · Updated 4 years ago
- ☆58 · Updated 2 years ago
- ☆109 · Updated 2 years ago
- Code for "The Intrinsic Dimension of Images and Its Impact on Learning" (ICLR 2021 Spotlight) https://openreview.net/forum?id=XJk19XzGq2J ☆71 · Updated last year
- The Pitfalls of Simplicity Bias in Neural Networks [NeurIPS 2020] (http://arxiv.org/abs/2006.07710v2) ☆41 · Updated last year
- [NeurIPS 2021] A Geometric Analysis of Neural Collapse with Unconstrained Features ☆59 · Updated 3 years ago
- Simple data balancing baselines for worst-group-accuracy benchmarks ☆42 · Updated last year
- The Full Spectrum of Deepnet Hessians at Scale: Dynamics with SGD Training and Sample Size ☆17 · Updated 6 years ago
- Code to reproduce experiments from "Does Knowledge Distillation Really Work", a paper which appeared in the NeurIPS 2021 proceedings ☆33 · Updated 2 years ago
- Code for Knowledge-Adaptation Priors, based on the NeurIPS 2021 paper by Khan and Swaroop ☆16 · Updated 3 years ago
- Official code for "In Search of Robust Measures of Generalization" (NeurIPS 2020) ☆28 · Updated 4 years ago
- ☆54 · Updated last year
- ☆23 · Updated 2 years ago
- ☆70 · Updated 10 months ago
- Contains code for the NeurIPS 2020 paper by Pan et al., "Continual Deep Learning by Functional Regularisation of Memorable Past" ☆44 · Updated 4 years ago
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations ☆30 · Updated 5 years ago