JeanKaddour / WASAM
Weight-Averaged Sharpness-Aware Minimization (NeurIPS 2022)
☆28 · Updated 2 years ago
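WASAM combines SAM's sharpness-aware gradient step (take the gradient at an adversarially perturbed copy of the weights) with a running average of the iterates, as in stochastic weight averaging. A minimal toy sketch of that idea on a quadratic objective, assuming nothing about the repository's actual API (all names below are illustrative):

```python
import numpy as np

def loss(w):
    # toy objective: a simple quadratic bowl
    return 0.5 * np.sum(w ** 2)

def grad(w):
    return w

def wasam_sketch(w0, lr=0.1, rho=0.05, steps=100):
    """Illustrative WASAM loop: SAM update + running weight average."""
    w = w0.astype(float).copy()
    w_avg = w.copy()
    for t in range(1, steps + 1):
        g = grad(w)
        # SAM ascent step: perturb weights toward higher loss (radius rho)
        eps = rho * g / (np.linalg.norm(g) + 1e-12)
        g_sharp = grad(w + eps)          # gradient at the perturbed point
        w -= lr * g_sharp                # descend with the sharpness-aware gradient
        w_avg += (w - w_avg) / (t + 1)   # SWA-style running average of iterates
    return w, w_avg

w, w_avg = wasam_sketch(np.array([3.0, -2.0]))
```

The averaged weights `w_avg`, not the final iterate, are what weight averaging returns at the end of training; the repository applies this to deep networks rather than toy quadratics.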
Alternatives and similar repositories for WASAM:
Users interested in WASAM are comparing it to the repositories listed below.
- Training vision models with full-batch gradient descent and regularization · ☆37 · Updated last year
- ☆34 · Updated last year
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] · ☆35 · Updated 2 years ago
- ☆84 · Updated 2 years ago
- ☆55 · Updated 4 years ago
- Implementation of Effective Sparsification of Neural Networks with Global Sparsity Constraint · ☆28 · Updated 2 years ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" · ☆37 · Updated last year
- ☆57 · Updated last year
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] · ☆52 · Updated last year
- On the effectiveness of adversarial training against common corruptions [UAI 2022] · ☆30 · Updated 2 years ago
- [ICLR 2021] "Robust Overfitting may be mitigated by properly learned smoothening" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, Shiyu Chan… · ☆46 · Updated 3 years ago
- Code for "Just Train Twice: Improving Group Robustness without Training Group Information" · ☆70 · Updated 8 months ago
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] · ☆25 · Updated last year
- Official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022 · ☆20 · Updated 2 years ago
- Code for the ICLR 2022 paper "Salient Imagenet: How to discover spurious features in deep learning?" · ☆36 · Updated 2 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] · ☆43 · Updated last year
- ☆21 · Updated 2 years ago
- Code for the paper "Understanding Generalization through Visualizations" · ☆60 · Updated 4 years ago
- [ICLR'22] Self-supervised learning of optimally robust representations for domain shift · ☆23 · Updated 2 years ago
- On the Importance of Gradients for Detecting Distributional Shifts in the Wild · ☆54 · Updated 2 years ago
- [ICLR 2022 official code] Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? · ☆29 · Updated 2 years ago
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] · ☆35 · Updated 3 years ago
- Unofficial implementation of the DeepMind papers "Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples… · ☆95 · Updated 2 years ago
- ☆37 · Updated 2 years ago
- The Pitfalls of Simplicity Bias in Neural Networks [NeurIPS 2020] (http://arxiv.org/abs/2006.07710v2) · ☆39 · Updated last year
- Code for "BayesAdapter: Being Bayesian, Inexpensively and Robustly, via Bayesian Fine-tuning" · ☆31 · Updated 6 months ago
- [NeurIPS 2021] "When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?" · ☆48 · Updated 3 years ago
- Code and checkpoints of compressed networks for the paper "HYDRA: Pruning Adversarially Robust Neural Networks" (NeurIPS 2020) (ht… · ☆90 · Updated 2 years ago
- Code for the paper "MMA Training: Direct Input Space Margin Maximization through Adversarial Training" · ☆34 · Updated 4 years ago
- ☆107 · Updated last year