tml-epfl / understanding-sam
Towards Understanding Sharpness-Aware Minimization [ICML 2022]
☆35 · Updated 2 years ago
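For context, here is a minimal sketch of the SAM update rule that this repository analyzes: take an ascent step of size rho along the normalized gradient, then apply the ordinary descent step using the gradient computed at that perturbed point. The toy quadratic loss and the rho/lr values below are illustrative assumptions and are not taken from the tml-epfl/understanding-sam code.

```python
# Minimal NumPy sketch of the SAM two-step update (perturb, then descend).
# The quadratic loss, rho, and lr are illustrative assumptions only.
import numpy as np

def loss_grad(w):
    # Toy quadratic loss L(w) = 0.5 * ||A w - b||^2 and its gradient.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, -1.0])
    r = A @ w - b
    return 0.5 * r @ r, A.T @ r

def sam_step(w, lr=0.1, rho=0.05):
    _, g = loss_grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascend toward the sharper neighbor
    _, g_adv = loss_grad(w + eps)                # gradient at the perturbed point
    return w - lr * g_adv                        # descend using that gradient

w = np.zeros(2)
for _ in range(100):
    w = sam_step(w)
print(loss_grad(w)[0])  # loss should approach the minimum of the toy quadratic
```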
Alternatives and similar repositories for understanding-sam:
Users interested in understanding-sam are comparing it to the repositories listed below.
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- ☆34 · Updated last year
- ☆54 · Updated 4 years ago
- Training vision models with full-batch gradient descent and regularization ☆37 · Updated 2 years ago
- Source code of "What can linearized neural networks actually say about generalization?" ☆20 · Updated 3 years ago
- Simple data balancing baselines for worst-group-accuracy benchmarks. ☆42 · Updated last year
- Code for "Just Train Twice: Improving Group Robustness without Training Group Information" ☆71 · Updated 11 months ago
- The Pitfalls of Simplicity Bias in Neural Networks [NeurIPS 2020] (http://arxiv.org/abs/2006.07710v2) ☆39 · Updated last year
- Official code for "In Search of Robust Measures of Generalization" (NeurIPS 2020) ☆28 · Updated 4 years ago
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] ☆28 · Updated last year
- Measurements of Three-Level Hierarchical Structure in the Outliers in the Spectrum of Deepnet Hessians (ICML 2019) ☆17 · Updated 5 years ago
- Code to implement the AND-mask and geometric mean for gradient-based optimization, from the paper "Learning explanations that are hard … ☆39 · Updated 4 years ago
- Weight-Averaged Sharpness-Aware Minimization (NeurIPS 2022) ☆28 · Updated 2 years ago
- ☆38 · Updated 3 years ago
- ☆67 · Updated 4 months ago
- ☆22 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- ☆17 · Updated 2 years ago
- [ICLR'22] Self-supervised learning optimally robust representations for domain shift. ☆23 · Updated 3 years ago
- Implementation of Effective Sparsification of Neural Networks with Global Sparsity Constraint ☆31 · Updated 3 years ago
- [NeurIPS 2021] A Geometric Analysis of Neural Collapse with Unconstrained Features ☆56 · Updated 2 years ago
- Computing various measures and generalization bounds on convolutional and fully connected networks ☆35 · Updated 6 years ago
- ☆34 · Updated 4 years ago
- Understanding Rare Spurious Correlations in Neural Networks ☆12 · Updated 2 years ago
- Code for Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot ☆42 · Updated 4 years ago
- ☆34 · Updated 3 years ago
- ☆62 · Updated 3 years ago
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) ☆13 · Updated 2 years ago
- ☆8 · Updated 4 years ago
- Distilling Model Failures as Directions in Latent Space ☆46 · Updated 2 years ago