steverab / failing-loudly
Code repository for our paper "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift": https://arxiv.org/abs/1810.11953
☆104 · Updated last year
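For context, the paper studies detecting dataset shift by reducing the dimensionality of the data and then applying statistical two-sample tests. The snippet below is a minimal sketch of that general idea only, not code from this repository: it assumes NumPy, SciPy, and scikit-learn are available, and the function name `detect_shift` is hypothetical.

```python
# Hedged sketch: flag a shift between a source and a target sample by
# projecting both onto a few principal components and running a
# Kolmogorov-Smirnov test per component with a Bonferroni correction.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.decomposition import PCA

def detect_shift(source, target, n_components=5, alpha=0.05):
    """Return (shift_detected, per-component p-values).

    source, target: 2-D arrays of shape (n_samples, n_features).
    A shift is flagged if any component's KS test rejects at the
    Bonferroni-corrected level alpha / n_components.
    """
    pca = PCA(n_components=n_components).fit(source)
    src_z, tgt_z = pca.transform(source), pca.transform(target)
    p_values = np.array([ks_2samp(src_z[:, i], tgt_z[:, i]).pvalue
                         for i in range(n_components)])
    return bool((p_values < alpha / n_components).any()), p_values

# Toy usage: the target distribution has a shifted mean.
rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(500, 20))
target = rng.normal(0.5, 1.0, size=(500, 20))
shifted, p_vals = detect_shift(source, target)
print(shifted, p_vals.round(4))
```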
Alternatives and similar repositories for failing-loudly:
Users interested in failing-loudly are comparing it to the libraries listed below.
- Calibration library and code for the paper: Verified Uncertainty Calibration. Ananya Kumar, Percy Liang, Tengyu Ma. NeurIPS 2019 (Spotlight). ☆148 · Updated 2 years ago
- A lightweight implementation of removal-based explanations for ML models. ☆59 · Updated 3 years ago
- Repository for code release of paper "Robust Variational Autoencoders for Outlier Detection and Repair of Mixed-Type Data" (AISTATS 2020). ☆50 · Updated 5 years ago
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. ☆130 · Updated 4 years ago
- Model Agnostic Counterfactual Explanations ☆88 · Updated 2 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true class with high probability. ☆238 · Updated 2 years ago
- Drift Detection for your PyTorch Models ☆316 · Updated 2 years ago
- automatic data slicing ☆34 · Updated 3 years ago
- A repo for transfer learning with deep tabular models ☆102 · Updated 2 years ago
- ☆31 · Updated 3 years ago
- A practical Active Learning python package with a strong focus on experiments. ☆51 · Updated 2 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆128 · Updated 3 years ago
- Algorithms for abstention, calibration and domain adaptation to label shift. ☆36 · Updated 4 years ago
- Reusable BatchBALD implementation ☆79 · Updated last year
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆73 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- All about explainable AI, algorithmic fairness and more ☆107 · Updated last year
- Code for the paper "Calibrating Deep Neural Networks using Focal Loss" ☆160 · Updated last year
- A Python package for unwrapping ReLU DNNs ☆70 · Updated last year
- The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021). ☆219 · Updated last year
- WeightedSHAP: analyzing and improving Shapley based feature attributions (NeurIPS 2022) ☆160 · Updated 2 years ago
- ☆42 · Updated 4 years ago
- Train Gradient Boosting models that are both high-performance *and* Fair! ☆103 · Updated 9 months ago
- ☆125 · Updated 3 years ago
- Measuring data importance over ML pipelines using the Shapley value. ☆38 · Updated 2 months ago
- Meaningful Local Explanation for Machine Learning Models ☆41 · Updated 2 years ago
- A python library to discover and mitigate biases in machine learning models and datasets ☆19 · Updated last year
- Weakly Supervised End-to-End Learning (NeurIPS 2021) ☆156 · Updated 2 years ago
- Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University ☆45 · Updated 2 years ago