fKunstner / noise-sgd-adam-sign
☆16 · Updated 2 years ago
Alternatives and similar repositories for noise-sgd-adam-sign
Users interested in noise-sgd-adam-sign are comparing it to the repositories listed below.
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 3 years ago
- Measurements of Three-Level Hierarchical Structure in the Outliers in the Spectrum of Deepnet Hessians (ICML 2019) ☆17 · Updated 6 years ago
- ☆17 · Updated last year
- Training vision models with full-batch gradient descent and regularization ☆37 · Updated 2 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- Implementations of orthogonal and semi-orthogonal convolutions in the Fourier domain with applications to adversarial robustness ☆47 · Updated 4 years ago
- Simple CIFAR10 ResNet example with JAX. ☆23 · Updated 4 years ago
- ☆37 · Updated last year
- ☆34 · Updated last year
- Source code of "What can linearized neural networks actually say about generalization?" ☆20 · Updated 3 years ago
- ☆28 · Updated 2 years ago
- Contains code for the NeurIPS 2020 paper by Pan et al., "Continual Deep Learning by Functional Regularisation of Memorable Past" ☆44 · Updated 4 years ago
- SGD with large step sizes learns sparse features [ICML 2023] ☆33 · Updated 2 years ago
- ☆55 · Updated 5 years ago
- Official code for "In Search of Robust Measures of Generalization" (NeurIPS 2020) ☆28 · Updated 4 years ago
- Computing various measures and generalization bounds on convolutional and fully connected networks ☆35 · Updated 6 years ago
- Last-layer Laplace approximation code examples ☆83 · Updated 3 years ago
- Simple data balancing baselines for worst-group-accuracy benchmarks. ☆42 · Updated last year
- ☆23 · Updated 2 years ago
- ☆58 · Updated 2 years ago
- Code to implement the AND-mask and geometric mean to do gradient-based optimization, from the paper "Learning explanations that are hard …" ☆40 · Updated 4 years ago
- ☆34 · Updated 3 years ago
- Pytorch code for "Improving Self-Supervised Learning by Characterizing Idealized Representations" ☆41 · Updated 2 years ago
- Code to reproduce experiments from "Does Knowledge Distillation Really Work?" (NeurIPS 2021) ☆33 · Updated last year
- Code for "The Intrinsic Dimension of Images and Its Impact on Learning" (ICLR 2021 Spotlight, https://openreview.net/forum?id=XJk19XzGq2J) ☆71 · Updated last year
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations ☆31 · Updated 5 years ago
- ☆108 · Updated last year
- ☆70 · Updated 8 months ago
- The Pitfalls of Simplicity Bias in Neural Networks [NeurIPS 2020] (http://arxiv.org/abs/2006.07710v2) ☆41 · Updated last year
- Distilling Model Failures as Directions in Latent Space ☆47 · Updated 2 years ago