gifford-lab / overinterpretation
Code for the Overinterpretation paper
☆19 · Updated last year
Alternatives and similar repositories for overinterpretation
Users interested in overinterpretation are comparing it to the repositories listed below.
- PRIME: A Few Primitives Can Boost Robustness to Common Corruptions ☆42 · Updated 2 years ago
- Code for Active Learning at The ImageNet Scale. This repository implements many popular active learning algorithms and allows training wi… ☆53 · Updated 3 years ago
- PyTorch implementation for "The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction" ☆33 · Updated 2 years ago
- Fine-grained ImageNet annotations ☆29 · Updated 5 years ago
- ☆46 · Updated 4 years ago
- DiWA: Diverse Weight Averaging for Out-of-Distribution Generalization ☆31 · Updated 2 years ago
- Code for the CVPR 2021 paper: MOOD: Multi-level Out-of-distribution Detection ☆38 · Updated last year
- Code for the CVPR 2021 paper: Understanding Failures of Deep Networks via Robust Feature Extraction ☆36 · Updated 3 years ago
- Advances in Neural Information Processing Systems (NeurIPS 2021) ☆22 · Updated 2 years ago
- ☆36 · Updated 2 years ago
- GitHub repository for the conference paper GLOD: Gaussian Likelihood OOD detector ☆16 · Updated 3 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- Code for the paper "Can contrastive learning avoid shortcut solutions?" (NeurIPS 2021) ☆47 · Updated 3 years ago
- Recycling diverse models ☆44 · Updated 2 years ago
- ☆45 · Updated 2 years ago
- ☆38 · Updated 3 years ago
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) ☆25 · Updated 2 years ago
- ☆58 · Updated 3 years ago
- MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts (ICLR 2022) ☆109 · Updated 2 years ago
- Official codebase of the paper "Rehearsal revealed: The limits and merits of revisiting samples in continual learning" ☆27 · Updated 3 years ago
- Codebase used in the paper "Foundational Models for Continual Learning: An Empirical Study of Latent Replay" ☆30 · Updated 2 years ago
- ☆95 · Updated 2 years ago
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation ☆42 · Updated 4 years ago
- ☆19 · Updated 3 years ago
- Linear Mode Connectivity in Multitask and Continual Learning: https://arxiv.org/abs/2010.04495 ☆11 · Updated 4 years ago
- Repository for the paper "Do SSL Models Have Déjà Vu? A Case of Unintended Memorization in Self-supervised Learning" ☆36 · Updated 2 years ago
- ☆35 · Updated last year
- ☆55 · Updated 4 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- ☆26 · Updated 3 years ago