MadryLab / journey-TRAK
Code for the paper "The Journey, Not the Destination: How Data Guides Diffusion Models"
☆22 · Updated last year
Alternatives and similar repositories for journey-TRAK:
Users interested in journey-TRAK are comparing it to the repositories listed below.
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- A simple and efficient baseline for data attribution ☆11 · Updated last year
- ☆30 · Updated 9 months ago
- ☆14 · Updated last year
- ☆34 · Updated last year
- ☆28 · Updated last year
- ☆55 · Updated 4 years ago
- Code for the ICLR 2022 paper "Salient ImageNet: How to discover spurious features in deep learning?" ☆40 · Updated 2 years ago
- Intriguing Properties of Data Attribution on Diffusion Models (ICLR 2024) ☆28 · Updated last year
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 2 years ago
- Host CIFAR-10.2 Data Set ☆13 · Updated 3 years ago
- Training vision models with full-batch gradient descent and regularization ☆37 · Updated 2 years ago
- Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation ☆45 · Updated 2 years ago
- Distilling Model Failures as Directions in Latent Space ☆46 · Updated 2 years ago
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆20 · Updated last year
- Data for "Datamodels: Predicting Predictions with Training Data" ☆97 · Updated last year
- ☆33 · Updated 4 months ago
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) ☆13 · Updated 2 years ago
- Source code of "What can linearized neural networks actually say about generalization?" ☆20 · Updated 3 years ago
- ☆21 · Updated 9 months ago
- ☆40 · Updated 2 years ago
- Pytorch Datasets for Easy-To-Hard ☆27 · Updated 3 months ago
- The Pitfalls of Simplicity Bias in Neural Networks [NeurIPS 2020] (http://arxiv.org/abs/2006.07710v2) ☆39 · Updated last year
- ☆60 · Updated 3 years ago
- ☆67 · Updated 4 months ago
- Code for the NeurIPS 2023 paper "A Bayesian Approach To Analysing Training Data Attribution In Deep Learning" ☆17 · Updated last year
- ☆17 · Updated 2 years ago
- Code for "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" ☆18 · Updated 2 years ago
- ☆38 · Updated 3 years ago
- What do we learn from inverting CLIP models? ☆54 · Updated last year