harshays / simplicitybiaspitfalls
The Pitfalls of Simplicity Bias in Neural Networks [NeurIPS 2020] (http://arxiv.org/abs/2006.07710v2)
☆41 · Updated last year
Alternatives and similar repositories for simplicitybiaspitfalls
Users interested in simplicitybiaspitfalls are comparing it to the repositories listed below.
- ☆55 · Updated 4 years ago
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations ☆31 · Updated 5 years ago
- ☆108 · Updated last year
- Code for the paper "Understanding Generalization through Visualizations" ☆61 · Updated 4 years ago
- Code for "Just Train Twice: Improving Group Robustness without Training Group Information" ☆72 · Updated last year
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) ☆13 · Updated 2 years ago
- ☆34 · Updated 3 years ago
- Simple data balancing baselines for worst-group-accuracy benchmarks ☆42 · Updated last year
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 3 years ago
- Training vision models with full-batch gradient descent and regularization ☆37 · Updated 2 years ago
- ☆38 · Updated 4 years ago
- Code for the ICLR 2022 paper "Salient ImageNet: How to Discover Spurious Features in Deep Learning?" ☆40 · Updated 2 years ago
- ☆62 · Updated 4 years ago
- Measurements of Three-Level Hierarchical Structure in the Outliers in the Spectrum of Deepnet Hessians (ICML 2019) ☆17 · Updated 6 years ago
- Implementation of Invariant Risk Minimization (https://arxiv.org/abs/1907.02893) ☆89 · Updated 5 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- ☆46 · Updated 2 years ago
- ☆59 · Updated 2 years ago
- Original dataset release for CIFAR-10H ☆83 · Updated 4 years ago
- ☆141 · Updated 4 years ago
- Computing various measures and generalization bounds on convolutional and fully connected networks ☆35 · Updated 6 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- Self-supervised learning of optimally robust representations for domain shift [ICLR 2022] ☆24 · Updated 3 years ago
- ☆34 · Updated last year
- A way to achieve uniform confidence far away from the training data ☆38 · Updated 4 years ago
- Learning from Failure: Training Debiased Classifier from Biased Classifier (NeurIPS 2020) ☆91 · Updated 4 years ago
- Multi Steepest Descent (MSD) for robustness against the union of multiple perturbation models [ICML 2020] ☆26 · Updated last year
- Weight-Averaged Sharpness-Aware Minimization (NeurIPS 2022) ☆28 · Updated 2 years ago
- ImageNet Testbed, associated with the paper "Measuring Robustness to Natural Distribution Shifts in Image Classification" ☆119 · Updated 2 years ago
- Source code for the paper "Exploiting Excessive Invariance Caused by Norm-Bounded Adversarial Robustness" ☆25 · Updated 5 years ago