tml-epfl / sharpness-vs-generalization
A modern look at the relationship between sharpness and generalization [ICML 2023]
☆43 · Updated last year
Alternatives and similar repositories for sharpness-vs-generalization:
Users interested in sharpness-vs-generalization are comparing it to the repositories listed below.
- ☆34 · Updated last year
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 2 years ago
- Official code for "In Search of Robust Measures of Generalization" (NeurIPS 2020) ☆28 · Updated 4 years ago
- ☆54 · Updated 4 years ago
- Distilling Model Failures as Directions in Latent Space ☆46 · Updated 2 years ago
- Code for the paper "The Journey, Not the Destination: How Data Guides Diffusion Models" ☆21 · Updated last year
- ☆17 · Updated 2 years ago
- Source code of "What can linearized neural networks actually say about generalization?" ☆20 · Updated 3 years ago
- Training vision models with full-batch gradient descent and regularization ☆37 · Updated 2 years ago
- ☆44 · Updated 2 years ago
- Simple data balancing baselines for worst-group-accuracy benchmarks ☆41 · Updated last year
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations ☆30 · Updated 4 years ago
- A simple and efficient baseline for data attribution ☆11 · Updated last year
- Code for "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" ☆18 · Updated 2 years ago
- ☆28 · Updated 7 months ago
- ☆57 · Updated 2 years ago
- [NeurIPS'22] Official repository for Characterizing Datapoints via Second-Split Forgetting ☆14 · Updated last year
- Code for the ICLR 2022 paper "Salient ImageNet: How to Discover Spurious Features in Deep Learning?" ☆38 · Updated 2 years ago
- ☆39 · Updated 2 years ago
- On the effectiveness of adversarial training against common corruptions [UAI 2022] ☆30 · Updated 2 years ago
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) ☆13 · Updated 2 years ago
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆20 · Updated 10 months ago
- Implementation of Confidence-Calibrated Adversarial Training (CCAT) ☆45 · Updated 4 years ago
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] ☆27 · Updated last year
- ☆39 · Updated 3 years ago
- ☆107 · Updated last year
- Weight-Averaged Sharpness-Aware Minimization (NeurIPS 2022) ☆28 · Updated 2 years ago
- The Pitfalls of Simplicity Bias in Neural Networks [NeurIPS 2020] (http://arxiv.org/abs/2006.07710v2) ☆39 · Updated last year
- SGD with large step sizes learns sparse features [ICML 2023] ☆32 · Updated last year
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) ☆25 · Updated 2 years ago