MadryLab / bias-transfer
☆15 · Updated 3 years ago
Alternatives and similar repositories for bias-transfer
Users interested in bias-transfer are comparing it to the repositories listed below.
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) ☆25 · Updated 3 years ago
- ☆36 · Updated 3 years ago
- Distilling Model Failures as Directions in Latent Space ☆47 · Updated 2 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated 2 years ago
- DiWA: Diverse Weight Averaging for Out-of-Distribution Generalization ☆31 · Updated 2 years ago
- Certified Patch Robustness via Smoothed Vision Transformers ☆42 · Updated 3 years ago
- Code for the paper "Out-of-Domain Robustness via Targeted Augmentations" ☆13 · Updated 2 years ago
- ☆18 · Updated 3 years ago
- Code for "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" ☆20 · Updated 2 years ago
- Code for T-MARS data filtering ☆35 · Updated 2 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated 2 years ago
- ☆26 · Updated 3 years ago
- Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation ☆45 · Updated 2 years ago
- Recycling diverse models ☆45 · Updated 2 years ago
- Source code of "Hold me tight! Influence of discriminative features on deep network boundaries" ☆21 · Updated 3 years ago
- Official implementation of Plug-In Inversion ☆16 · Updated 3 years ago
- [ICML 2023] "Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights?" by Ruisi Cai, Zhenyu Zhang, Zhangyang Wang ☆16 · Updated 2 years ago
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆21 · Updated last year
- LISA [ICML 2022] ☆51 · Updated 2 years ago
- ☆25 · Updated 2 years ago
- Code and results accompanying the paper "RLSbench: Domain Adaptation under Relaxed Label Shift" ☆35 · Updated 2 years ago
- ☆58 · Updated 2 years ago
- Official code for the paper "Does CLIP's Generalization Performance Mainly Stem from High Train-Test Similarity?" (ICLR 2024) ☆10 · Updated last year
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] ☆36 · Updated 4 years ago
- [ICLR 2022] Self-supervised learning of optimally robust representations for domain shift ☆24 · Updated 3 years ago
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆17 · Updated last year
- ☆34 · Updated last year
- ☆46 · Updated 2 years ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- Minimum viable code for the Decodable Information Bottleneck paper; PyTorch implementation ☆11 · Updated 4 years ago