stanislavfort / exploring_the_limits_of_OOD_detection
Code to replicate the key results from Exploring the Limits of Out-of-Distribution Detection (https://arxiv.org/abs/2106.03004) by Stanislav Fort, Jie Ren, and Balaji Lakshminarayanan, published at NeurIPS 2021.
☆44 · Updated 4 years ago
Alternatives and similar repositories for exploring_the_limits_of_OOD_detection
Users interested in exploring_the_limits_of_OOD_detection are comparing it to the libraries listed below.
- SSD: A Unified Framework for Self-Supervised Outlier Detection [ICLR 2021] ☆138 · Updated 4 years ago
- ☆57 · Updated 4 years ago
- ☆46 · Updated 5 years ago
- Robust Out-of-distribution Detection in Neural Networks ☆73 · Updated 3 years ago
- Official implementation of the paper Gradient Matching for Domain Generalization ☆123 · Updated 4 years ago
- Robustness and adaptation of ImageNet-scale models. Pre-release, stay tuned for updates. ☆137 · Updated 2 years ago
- MOS: Towards Scaling Out-of-distribution Detection for Large Semantic Space ☆98 · Updated 4 years ago
- Code repository for the paper OODformer: Out-Of-Distribution Detection Transformer ☆41 · Updated 4 years ago
- [SafeAI'21] Feature Space Singularity for Out-of-Distribution Detection ☆79 · Updated 4 years ago
- ☆68 · Updated 6 years ago
- PyTorch implementation of the REMIND method from our ECCV-2020 paper "REMIND Your Neural Network to Prevent Catastrophic Forgetting" ☆83 · Updated 2 years ago
- Official PyTorch implementation of the Fishr regularization for out-of-distribution generalization ☆89 · Updated 3 years ago
- Metrics for out-of-distribution (OOD) detection performance evaluation ☆50 · Updated last year
- On the Importance of Gradients for Detecting Distributional Shifts in the Wild ☆56 · Updated 3 years ago
- ☆111 · Updated 2 years ago
- Official code for ICML 2022: Mitigating Neural Network Overconfidence with Logit Normalization ☆154 · Updated 3 years ago
- Whitening for Self-Supervised Representation Learning | Official repository ☆133 · Updated 2 years ago
- Code for the paper Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty ☆146 · Updated 2 years ago
- Confidence-Aware Learning for Deep Neural Networks (ICML 2020) ☆74 · Updated 5 years ago
- A way to achieve uniform confidence far away from the training data ☆38 · Updated 4 years ago
- We propose a theoretically motivated method, Adversarial Training with informative Outlier Mining (ATOM), which improves the robustness o… ☆57 · Updated 3 years ago
- Last-layer Laplace approximation code examples ☆83 · Updated 4 years ago
- ☆47 · Updated 2 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 5 years ago
- Generalizing to unseen domains via distribution matching ☆73 · Updated 5 years ago
- Code release for the paper Extremely Simple Activation Shaping for Out-of-Distribution Detection ☆55 · Updated last year
- Code and data for the paper "In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation" ☆26 · Updated 2 years ago
- Code used in the paper "Understanding Dimensional Collapse in Contrastive Self-supervised Learning" ☆79 · Updated 3 years ago
- Code for the paper "Calibrating Deep Neural Networks using Focal Loss" ☆161 · Updated 2 years ago
- MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts (ICLR 2022) ☆109 · Updated 3 years ago