JeanKaddour / WASAM
Weight-Averaged Sharpness-Aware Minimization (NeurIPS 2022)
☆28 · Updated 2 years ago
Alternatives and similar repositories for WASAM:
Users interested in WASAM are comparing it to the libraries listed below.
- Implementation of Effective Sparsification of Neural Networks with Global Sparsity Constraint ☆31 · Updated 3 years ago
- ☆34 · Updated last year
- Training vision models with full-batch gradient descent and regularization ☆37 · Updated 2 years ago
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 2 years ago
- ☆22 · Updated 2 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- Code for "Just Train Twice: Improving Group Robustness without Training Group Information" ☆71 · Updated 10 months ago
- ☆86 · Updated 2 years ago
- [ICLR 2021] "Robust Overfitting may be mitigated by properly learned smoothening" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, Shiyu Chan… ☆46 · Updated 3 years ago
- ☆54 · Updated 4 years ago
- Repo for the paper "Agree to Disagree: Diversity through Disagreement for Better Transferability" ☆35 · Updated 2 years ago
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] ☆28 · Updated last year
- ☆11 · Updated 2 years ago
- Official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022 ☆21 · Updated 2 years ago
- On the effectiveness of adversarial training against common corruptions [UAI 2022] ☆30 · Updated 2 years ago
- ☆57 · Updated 2 years ago
- [NeurIPS 2021] A Geometric Analysis of Neural Collapse with Unconstrained Features ☆55 · Updated 2 years ago
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] ☆52 · Updated 2 years ago
- (PyTorch) Training ResNets on ImageNet-100 data ☆56 · Updated 3 years ago
- [NeurIPS 2021] "When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?" ☆48 · Updated 3 years ago
- ☆48 · Updated 2 years ago
- On the Importance of Gradients for Detecting Distributional Shifts in the Wild ☆55 · Updated 2 years ago
- ☆44 · Updated 2 years ago
- [TPAMI 2023] Low Dimensional Landscape Hypothesis is True: DNNs Can Be Trained in Tiny Subspaces ☆40 · Updated 2 years ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- Code for the paper "Understanding Generalization through Visualizations" ☆60 · Updated 4 years ago
- Code for the ICLR 2022 paper "Salient ImageNet: How to Discover Spurious Features in Deep Learning?" ☆40 · Updated 2 years ago
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations ☆30 · Updated 4 years ago
- [ICLR 2022] Self-supervised learning of optimally robust representations for domain shift ☆23 · Updated 3 years ago
- "Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness" (NeurIPS 2020) ☆50 · Updated 4 years ago