tml-epfl/understanding-sam
Towards Understanding Sharpness-Aware Minimization [ICML 2022]
☆35 · Updated 2 years ago
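For readers skimming this listing: the repository studies Sharpness-Aware Minimization (SAM), which first perturbs the weights toward higher loss and then applies the gradient taken at that perturbed point. Below is a minimal sketch of that two-step update, not the repository's implementation; `model`, `loss_fn`, `base_optimizer`, and `rho=0.05` are placeholder assumptions.

```python
# Minimal sketch of a SAM-style update (illustrative only, not this repository's
# code). `model`, `loss_fn`, `base_optimizer`, and `rho` are placeholder assumptions.
import torch

def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    # 1) Ascent step: compute the gradient and move the weights to the
    #    perturbed point w + rho * g / ||g||.
    loss = loss_fn(model(x), y)
    loss.backward()
    grad_norm = torch.norm(torch.stack(
        [p.grad.norm(2) for p in model.parameters() if p.grad is not None]), 2)
    perturbations = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            perturbations.append((p, e))
    model.zero_grad()

    # 2) Descent step: take the gradient at the perturbed point, then restore
    #    the original weights before the base optimizer applies the update.
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in perturbations:
            p.sub_(e)
    base_optimizer.step()
    base_optimizer.zero_grad()
    return loss.item()
```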
Alternatives and similar repositories for understanding-sam:
Users interested in understanding-sam are comparing it to the repositories listed below.
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- ☆55 · Updated 4 years ago
- ☆34 · Updated last year
- Official code for "In Search of Robust Measures of Generalization" (NeurIPS 2020) ☆28 · Updated 4 years ago
- Training vision models with full-batch gradient descent and regularization ☆37 · Updated last year
- ☆38 · Updated 3 years ago
- Weight-Averaged Sharpness-Aware Minimization (NeurIPS 2022) ☆28 · Updated 2 years ago
- Simple data balancing baselines for worst-group-accuracy benchmarks ☆41 · Updated last year
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations ☆30 · Updated 4 years ago
- Code to implement the AND-mask and geometric mean to do gradient-based optimization, from the paper "Learning explanations that are hard …" ☆39 · Updated 4 years ago
- ☆23 · Updated 2 years ago
- [NeurIPS 2021] A Geometric Analysis of Neural Collapse with Unconstrained Features ☆54 · Updated 2 years ago
- Code for "Just Train Twice: Improving Group Robustness without Training Group Information" ☆70 · Updated 8 months ago
- ☆63 · Updated last month
- Source code of "What can linearized neural networks actually say about generalization?" ☆20 · Updated 3 years ago
- ☆57 · Updated last year
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- Implementation of Effective Sparsification of Neural Networks with Global Sparsity Constraint ☆28 · Updated 2 years ago
- Measurements of Three-Level Hierarchical Structure in the Outliers in the Spectrum of Deepnet Hessians (ICML 2019) ☆17 · Updated 5 years ago
- [ICLR'22] Self-supervised learning of optimally robust representations for domain shift ☆23 · Updated 2 years ago
- ☆62 · Updated 3 years ago
- ☆34 · Updated 3 years ago
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] ☆35 · Updated 3 years ago
- ☆22 · Updated last year
- The Full Spectrum of Deepnet Hessians at Scale: Dynamics with SGD Training and Sample Size ☆17 · Updated 5 years ago
- Host CIFAR-10.2 Data Set ☆13 · Updated 3 years ago
- The Pitfalls of Simplicity Bias in Neural Networks [NeurIPS 2020] (http://arxiv.org/abs/2006.07710v2) ☆39 · Updated last year
- Distilling Model Failures as Directions in Latent Space ☆46 · Updated last year
- Rethinking Bias-Variance Trade-off for Generalization of Neural Networks ☆49 · Updated 3 years ago
- ☆34 · Updated 6 months ago