j-cb / Breaking_Down_OOD_Detection
☆12 · Updated 11 months ago
Alternatives and similar repositories for Breaking_Down_OOD_Detection
Users interested in Breaking_Down_OOD_Detection are comparing it to the repositories listed below.
- Provable Worst Case Guarantees for the Detection of Out-of-Distribution Data ☆13 · Updated 3 years ago
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) ☆13 · Updated 3 years ago
- A way to achieve uniform confidence far away from the training data. ☆38 · Updated 4 years ago
- ☆12 · Updated 3 years ago
- ☆23 · Updated 3 years ago
- Code for "Just Train Twice: Improving Group Robustness without Training Group Information" ☆73 · Updated last year
- We propose a theoretically motivated method, Adversarial Training with informative Outlier Mining (ATOM), which improves the robustness o… ☆57 · Updated 3 years ago
- Code for the paper: Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization (https://arxiv.org/abs/2… ☆23 · Updated 5 years ago
- Official implementation for Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds (NeurIPS 2021). ☆25 · Updated 3 years ago
- Post-processing for fair classification ☆16 · Updated 7 months ago
- ☆28 · Updated 4 years ago
- Learning from Failure: Training Debiased Classifier from Biased Classifier (NeurIPS 2020) ☆94 · Updated 5 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- Improving Transformation Invariance in Contrastive Representation Learning ☆13 · Updated 4 years ago
- On the Importance of Gradients for Detecting Distributional Shifts in the Wild ☆56 · Updated 3 years ago
- On the effectiveness of adversarial training against common corruptions [UAI 2022] ☆30 · Updated 3 years ago
- Implementation of Contrastive Learning with Adversarial Examples ☆29 · Updated 5 years ago
- Diagnosing Vulnerability of Variational Auto-Encoders to Adversarial Attacks ☆13 · Updated 3 years ago
- [NeurIPS 2021] “When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?” ☆48 · Updated 4 years ago
- Code relative to "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" ☆19 · Updated 3 years ago
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations ☆30 · Updated 5 years ago
- ☆36 · Updated 4 years ago
- ☆39 · Updated last year
- ☆47 · Updated 3 years ago
- Official PyTorch implementation of the Fishr regularization for out-of-distribution generalization ☆89 · Updated 3 years ago
- Invariant-feature Subspace Recovery (ISR) ☆23 · Updated 3 years ago
- ☆46 · Updated 5 years ago
- ☆49 · Updated 3 years ago
- ☆111 · Updated 2 years ago
- ☆56 · Updated 5 years ago