jfc43 / informative-outlier-mining
We propose a theoretically motivated method, Adversarial Training with informative Outlier Mining (ATOM), which improves the robustness of OOD detection to various types of adversarial OOD inputs and establishes state-of-the-art performance.
☆56 · Updated 2 years ago
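The training loop itself is not shown on this page, but the core mining step behind ATOM can be sketched in a few lines. The snippet below is a minimal, illustrative sketch only: it assumes the detector exposes a per-sample OOD score for a large pool of auxiliary outlier candidates, and the function name `mine_informative_outliers`, the quantile parameter `q`, and the toy scores are hypothetical, not the repository's actual API.

```python
import torch

def mine_informative_outliers(ood_scores: torch.Tensor, n_keep: int, q: float = 0.125) -> torch.Tensor:
    """Pick indices of `n_keep` candidates from a large auxiliary outlier pool.

    Candidates are sorted by the detector's OOD score (ascending) and a
    contiguous slice starting at quantile `q` is kept, so each training batch
    draws outliers near the decision boundary rather than easy,
    already-rejected ones.
    """
    order = torch.argsort(ood_scores)        # ascending OOD score: hardest candidates first
    start = int(q * ood_scores.numel())      # skip the lowest-scoring fraction
    return order[start:start + n_keep]

# Toy usage: random scores stand in for real detector outputs.
pool_scores = torch.rand(1000)
batch_idx = mine_informative_outliers(pool_scores, n_keep=64)
```

Under this reading, the mined indices would supply the outlier half of each adversarial training batch; see the repository for the actual selection rule and hyperparameters.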
Related projects
Alternatives and complementary repositories for informative-outlier-mining
- Official implementation of the paper "Gradient Matching for Domain Generalization" ☆116 · Updated 2 years ago
- A way to achieve uniform confidence far away from the training data. ☆36 · Updated 3 years ago
- Code for "Just Train Twice: Improving Group Robustness without Training Group Information" ☆67 · Updated 6 months ago
- Learning from Failure: Training Debiased Classifier from Biased Classifier (NeurIPS 2020) ☆89 · Updated 4 years ago
- On the Importance of Gradients for Detecting Distributional Shifts in the Wild ☆53 · Updated 2 years ago
- Code for "Environment Inference for Invariant Learning" (ICML 2021) ☆49 · Updated 3 years ago
- Official PyTorch implementation of the Fishr regularization for out-of-distribution generalization ☆83 · Updated 2 years ago
- Coresets via Bilevel Optimization ☆65 · Updated 4 years ago
- Repo for the paper "Agree to Disagree: Diversity through Disagreement for Better Transferability" ☆35 · Updated 2 years ago
- Robust Out-of-distribution Detection in Neural Networks ☆72 · Updated 2 years ago
- Sinkhorn Label Allocation is a label assignment method for semi-supervised self-training algorithms. The SLA algorithm is described in fu… ☆53 · Updated 3 years ago
- Simple data balancing baselines for worst-group-accuracy benchmarks. ☆40 · Updated last year
- PyTorch implementation of POEM (Out-of-distribution detection with posterior sampling), ICML 2022 ☆28 · Updated last year
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆99 · Updated 2 years ago
- The Pitfalls of Simplicity Bias in Neural Networks [NeurIPS 2020] (http://arxiv.org/abs/2006.07710v2) ☆39 · Updated 10 months ago
- Code for the paper "Bayesian Invariant Risk Minimization" (CVPR 2022). ☆42 · Updated last year
- "Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness" (NeurIPS 2020).☆50Updated 3 years ago
- Example implementation for the paper "Learning Robust Representations by Projecting Superficial Statistics Out" (ICLR Oral) ☆27 · Updated 3 years ago
- LISA (ICML 2022) ☆47 · Updated last year
- Invariant-feature Subspace Recovery (ISR) ☆23 · Updated 2 years ago
- Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style ☆48 · Updated 2 years ago