google / TrustScore
To Trust Or Not To Trust A Classifier. A measure of uncertainty for any trained (possibly black-box) classifier that is more effective than the classifier's own implied confidence (e.g. the softmax probability of a neural network). A minimal sketch of the underlying idea follows.
☆173, updated last year
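For context, the trust score in the paper compares how far a test point is from the training data of the predicted class versus how far it is from the nearest other class; predictions whose class is comparatively far away get a low score. The snippet below is a minimal sketch of that idea only, assuming scikit-learn and NumPy are available. It is not the repository's actual API (the class and method names `SimpleTrustScore`, `fit`, and `score` are hypothetical), and it omits the high-density filtering the paper applies to each class before measuring distances.

```python
# Minimal sketch of the trust-score idea: ratio of the distance to the
# nearest class other than the predicted one over the distance to the
# predicted class. Higher values mean the prediction agrees with the
# nearest-neighbour structure of the training data.
import numpy as np
from sklearn.neighbors import NearestNeighbors


class SimpleTrustScore:
    def fit(self, X_train, y_train):
        # Build one nearest-neighbour index per class.
        self.classes_ = np.unique(y_train)
        self.nn_ = {
            c: NearestNeighbors(n_neighbors=1).fit(X_train[y_train == c])
            for c in self.classes_
        }
        return self

    def score(self, X_test, y_pred):
        # Distance from every test point to its nearest neighbour in each class.
        dists = np.stack(
            [self.nn_[c].kneighbors(X_test)[0][:, 0] for c in self.classes_],
            axis=1,
        )  # shape: (n_test, n_classes)
        pred_idx = np.searchsorted(self.classes_, y_pred)
        rows = np.arange(dists.shape[0])
        d_pred = dists[rows, pred_idx]
        # Mask the predicted class, then take the closest *other* class.
        other = dists.copy()
        other[rows, pred_idx] = np.inf
        d_other = other.min(axis=1)
        return d_other / (d_pred + 1e-12)
```

With this sketch, `SimpleTrustScore().fit(X_train, y_train).score(X_test, clf.predict(X_test))` would return one score per test point; low scores flag predictions that disagree with the training data's neighbourhood structure even when the classifier's own confidence is high.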
Related projects
Alternatives and complementary repositories for TrustScore
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) (☆125, updated 3 years ago)
- Codebase for "Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains Its Predictions" (AAAI 2018) (☆73, updated 6 years ago)
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" (☆127, updated 3 years ago)
- Algorithms for abstention, calibration and domain adaptation to label shift (☆36, updated 3 years ago)
- Code release for "Representer Point Selection for Explaining Deep Neural Networks" (NeurIPS 2018) (☆67, updated 3 years ago)
- Keras implementation for DASP: Deep Approximate Shapley Propagation (ICML 2019) (☆60, updated 5 years ago)
- Calibration library and code for the paper "Verified Uncertainty Calibration", Ananya Kumar, Percy Liang, Tengyu Ma, NeurIPS 2019 (Spotlight) (☆142, updated last year)
- Code for reproducing the contrastive explanations in "Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives" (☆54, updated 6 years ago)
- Combating hidden stratification with GEORGE (☆62, updated 3 years ago)
- Code/figures in "Right for the Right Reasons" (☆55, updated 3 years ago)
- Implementation of "Estimating Training Data Influence by Tracing Gradient Descent" (NeurIPS 2020) (☆219, updated 2 years ago)
- Contains code for the NeurIPS 2019 paper "Practical Deep Learning with Bayesian Principles" (☆242, updated 4 years ago)
- This repository contains the full code for the "Towards fairness in machine learning with adversarial networks" blog post (☆117, updated 3 years ago)
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics (☆75, updated last year)
- Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning (☆230, updated 5 months ago)
- Calibration of Convolutional Neural Networks (☆156, updated last year)
- Reusable BatchBALD implementation (☆74, updated 8 months ago)
- Outlier Exposure with Confidence Control for Out-of-Distribution Detection (☆69, updated 3 years ago)
- Experiments for AAAI anchor paper (☆61, updated 6 years ago)
- Code repository for our paper "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift": https://arxiv.org/abs/1810.119… (☆102, updated 7 months ago)
- Training Confidence-Calibrated Classifier for Detecting Out-of-Distribution Samples (ICLR 2018) (☆178, updated 4 years ago)
- Python code for weakly-supervised learning (☆122, updated 4 years ago)
- Toolkit for building machine learning models that generalize to unseen domains and are robust to privacy and other attacks (☆172, updated last year)
- Code for the paper "Calibrating Deep Neural Networks using Focal Loss" (☆153, updated 9 months ago)
- Supervised Local Modeling for Interpretability (☆28, updated 6 years ago)