MadryLab / pretraining-distribution-shift-robustness
☆14 · Updated last year
Alternatives and similar repositories for pretraining-distribution-shift-robustness:
Users interested in pretraining-distribution-shift-robustness are comparing it to the libraries listed below.
- Code for the paper "The Journey, Not the Destination: How Data Guides Diffusion Models" ☆22 · Updated last year
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆20 · Updated 11 months ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- Distilling Model Failures as Directions in Latent Space ☆46 · Updated 2 years ago
- ☆17 · Updated 2 years ago
- ☆20 · Updated 7 months ago
- ☆16 · Updated last year
- Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation ☆45 · Updated 2 years ago
- Host for the CIFAR-10.2 data set ☆13 · Updated 3 years ago
- ☆17 · Updated last year
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) ☆25 · Updated 2 years ago
- ☆34 · Updated last year
- OODRobustBench: a Benchmark and Large-Scale Analysis of Adversarial Robustness under Distribution Shift [ICML 2024 and ICLRW-DMLR 2024] ☆19 · Updated 7 months ago
- ☆54 · Updated 4 years ago
- ☆44 · Updated 2 years ago
- Code for "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" ☆18 · Updated 2 years ago
- Code for the NeurIPS 2023 paper "A Bayesian Approach To Analysing Training Data Attribution In Deep Learning" ☆15 · Updated last year
- What do we learn from inverting CLIP models? ☆52 · Updated last year
- ModelDiff: A Framework for Comparing Learning Algorithms ☆55 · Updated last year
- Understanding Rare Spurious Correlations in Neural Networks ☆12 · Updated 2 years ago
- Spurious Features Everywhere: Large-Scale Detection of Harmful Spurious Features in ImageNet ☆30 · Updated last year
- Code for the NeurIPS 2020 paper "Optimizing Mode Connectivity via Neuron Alignment" ☆16 · Updated 4 years ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) ☆13 · Updated 2 years ago
- Training vision models with full-batch gradient descent and regularization ☆37 · Updated 2 years ago
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] ☆53 · Updated 2 years ago
- ☆28 · Updated last year
- Code for the ICLR 2022 paper "Salient ImageNet: How to discover spurious features in deep learning?" ☆39 · Updated 2 years ago
- ☆108 · Updated last year
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] ☆28 · Updated last year