gregdeon / spotlight
Implementation of the spotlight: a method for discovering systematic errors in deep learning models
☆11 · Updated 3 years ago
Alternatives and similar repositories for spotlight
Users interested in spotlight are comparing it to the repositories listed below.
- (ICML 2021) Mandoline: Model Evaluation under Distribution Shift ☆30 · Updated 4 years ago
- The official repository for the paper "Intermediate Layers Matter in Momentum Contrastive Self Supervised Learning" ☆40 · Updated 3 years ago
- Fast Axiomatic Attribution for Neural Networks (NeurIPS 2021) ☆16 · Updated 2 years ago
- Code repository for the AISTATS 2021 paper "Towards Understanding the Optimal Behaviors of Deep Active Learning Algorithms" ☆15 · Updated 4 years ago
- Fine-grained ImageNet annotations ☆29 · Updated 5 years ago
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation ☆42 · Updated 4 years ago
- Making Heads or Tails: Towards Semantically Consistent Visual Counterfactuals ☆30 · Updated 3 years ago
- Active and Sample-Efficient Model Evaluation ☆24 · Updated 4 months ago
- This repository provides the code for replicating the experiments in the paper "Building One-Shot Semi-supervised (BOSS) Learning up to F… ☆36 · Updated 5 years ago
- Label shift experiments ☆17 · Updated 4 years ago
- ☆11 · Updated 4 years ago
- Combating hidden stratification with GEORGE ☆64 · Updated 4 years ago
- Code for the CVPR 2021 paper "Understanding Failures of Deep Networks via Robust Feature Extraction" ☆36 · Updated 3 years ago
- Code for "Interpretable Image Recognition with Hierarchical Prototypes" ☆18 · Updated 5 years ago
- DiWA: Diverse Weight Averaging for Out-of-Distribution Generalization ☆31 · Updated 2 years ago
- Official code for the paper "Metadata Archaeology" ☆19 · Updated 2 years ago
- Domain Adaptation ☆23 · Updated 3 years ago
- ☆25 · Updated 5 years ago
- Implementation of the paper "Identifying Mislabeled Data using the Area Under the Margin Ranking": https://arxiv.org/pdf/2001.10528v2.pdf ☆21 · Updated 5 years ago
- ☆96 · Updated 2 years ago
- B-LRP is the repository for the paper "How Much Can I Trust You? — Quantifying Uncertainties in Explaining Neural Networks" ☆18 · Updated 3 years ago
- REVISE: A Tool for Measuring and Mitigating Bias in Visual Datasets --- https://arxiv.org/abs/2004.07999 ☆110 · Updated 3 years ago
- This code reproduces the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information" ☆50 · Updated 4 years ago
- Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning (AISTATS 2022 Oral) ☆41 · Updated 2 years ago
- ☆36 · Updated 2 years ago
- Robust Contrastive Learning Using Negative Samples with Diminished Semantics (NeurIPS 2021) ☆39 · Updated 3 years ago
- [CVPR 2022] Official code for the paper "A Stitch in Time Saves Nine: A Train-Time Regularizing Loss for Improved Neural Network Calibra… ☆33 · Updated 2 years ago
- ☆34 · Updated 3 months ago
- A Domain-Agnostic Benchmark for Self-Supervised Learning ☆107 · Updated 2 years ago
- Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling ☆31 · Updated 4 years ago