stanislavfort / exploring_the_limits_of_OOD_detection
Code to replicate the key results from "Exploring the Limits of Out-of-Distribution Detection" (https://arxiv.org/abs/2106.03004) by Stanislav Fort, Jie Ren, and Balaji Lakshminarayanan, published at NeurIPS 2021.
☆44 · Updated 3 years ago
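The paper's headline result is that features from large pre-trained models (e.g. Vision Transformers), scored with a simple Mahalanobis distance to in-distribution class centroids, give strong near-OOD detection (e.g. CIFAR-100 vs. CIFAR-10). The snippet below is not this repository's API; it is a minimal NumPy sketch of that scoring recipe, assuming you already have feature vectors extracted from a pre-trained backbone (the function names are illustrative).

```python
# Minimal sketch (not the repository's API): Mahalanobis-distance OOD scoring
# on features from a pre-trained backbone. Function names are illustrative.
import numpy as np

def fit_class_gaussians(train_feats, train_labels):
    """Fit per-class means and a shared covariance on in-distribution features."""
    classes = np.unique(train_labels)
    means = {c: train_feats[train_labels == c].mean(axis=0) for c in classes}
    centered = np.concatenate(
        [train_feats[train_labels == c] - means[c] for c in classes], axis=0
    )
    # Shared covariance across classes, with a small ridge for numerical stability.
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(train_feats.shape[1])
    return means, np.linalg.inv(cov)

def mahalanobis_ood_score(feats, means, precision):
    """OOD score = squared Mahalanobis distance to the nearest class centroid."""
    dists = []
    for mu in means.values():
        diff = feats - mu                                   # (N, D)
        dists.append(np.einsum("nd,de,ne->n", diff, precision, diff))
    return np.min(np.stack(dists, axis=0), axis=0)          # higher = more OOD
```

To evaluate, fit on in-distribution training features, score both in-distribution and OOD test features, and compute AUROC (e.g. with sklearn.metrics.roc_auc_score) treating the score as the OOD decision statistic.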
Alternatives and similar repositories for exploring_the_limits_of_OOD_detection
Users interested in exploring_the_limits_of_OOD_detection are comparing it to the repositories listed below.
- MOS: Towards Scaling Out-of-distribution Detection for Large Semantic Space ☆95 · Updated 4 years ago
- ☆46 · Updated 4 years ago
- Code repository for the paper "OODformer: Out-Of-Distribution Detection Transformer" ☆40 · Updated 3 years ago
- Official implementation of the paper "Gradient Matching for Domain Generalization" ☆122 · Updated 3 years ago
- Robust Out-of-distribution Detection in Neural Networks ☆73 · Updated 3 years ago
- [SafeAI'21] Feature Space Singularity for Out-of-Distribution Detection ☆79 · Updated 4 years ago
- Robustness and adaptation of ImageNet-scale models. Pre-release; stay tuned for updates. ☆137 · Updated 2 years ago
- SSD: A Unified Framework for Self-Supervised Outlier Detection [ICLR 2021] ☆137 · Updated 4 years ago
- PyTorch implementation of the REMIND method from our ECCV 2020 paper "REMIND Your Neural Network to Prevent Catastrophic Forgetting" ☆82 · Updated last year
- ☆57 · Updated 3 years ago
- Official PyTorch implementation of the Fishr regularization for out-of-distribution generalization ☆87 · Updated 3 years ago
- Confidence-Aware Learning for Deep Neural Networks (ICML 2020) ☆74 · Updated 5 years ago
- Learning from Failure: Training Debiased Classifier from Biased Classifier (NeurIPS 2020) ☆91 · Updated 4 years ago
- Whitening for Self-Supervised Representation Learning | Official repository ☆131 · Updated 2 years ago
- (NeurIPS 2020 Workshop on SSL) Official implementation of "MixCo: Mix-up Contrastive Learning for Visual Representation" ☆58 · Updated 2 years ago
- On the Importance of Gradients for Detecting Distributional Shifts in the Wild ☆56 · Updated 3 years ago
- Official code for ICML 2022: Mitigating Neural Network Overconfidence with Logit Normalization ☆152 · Updated 3 years ago
- [ICML 2021] "Self-Damaging Contrastive Learning", Ziyu Jiang, Tianlong Chen, Bobak Mortazavi, Zhangyang Wang ☆63 · Updated 3 years ago
- MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts (ICLR 2022) ☆109 · Updated 3 years ago
- We propose a theoretically motivated method, Adversarial Training with informative Outlier Mining (ATOM), which improves the robustness o… ☆57 · Updated 3 years ago
- ☆108 · Updated last year
- Code for the paper "Representational Continuity for Unsupervised Continual Learning" (ICLR 2022) ☆98 · Updated 2 years ago
- ☆44 · Updated 3 years ago
- Official repository for the paper "Self-Supervised Models are Continual Learners" (CVPR 2022) ☆124 · Updated 2 years ago
- PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures (CVPR 2022) ☆108 · Updated 3 years ago
- Official PyTorch implementation of MIRO (ECCV 2022) ☆87 · Updated 2 years ago
- A way to achieve uniform confidence far away from the training data ☆38 · Updated 4 years ago
- Official PyTorch implementation of "Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity" (ICLR'21 Oral) ☆106 · Updated 3 years ago
- Generalizing to unseen domains via distribution matching ☆72 · Updated 5 years ago
- [CVPR 2019] Learning Not to Learn: An adversarial method to train deep neural networks with biased data ☆111 · Updated 5 years ago