benedikthoeltgen / DeDUCE
☆8 · updated 3 years ago
Alternatives and similar repositories for DeDUCE:
Users interested in DeDUCE are comparing it to the repositories listed below.
- Code for "Generative causal explanations of black-box classifiers" — ☆33 · updated 4 years ago
- Code accompanying the paper "Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers" — ☆30 · updated last year
- A lightweight implementation of removal-based explanations for ML models — ☆57 · updated 3 years ago
- Code for the paper "Model Agnostic Interpretability for Multiple Instance Learning" — ☆13 · updated 3 years ago
- Quantile risk minimization — ☆24 · updated 5 months ago
- Explanation Optimization — ☆13 · updated 4 years ago
- (untitled) — ☆16 · updated last year
- Self-Explaining Neural Networks — ☆39 · updated 5 years ago
- Code for our paper — ☆12 · updated 2 years ago
- CME: Concept-based Model Extraction — ☆12 · updated 4 years ago
- (untitled) — ☆16 · updated last year
- Source code for "Joint Shapley values: a measure of joint feature importance" — ☆13 · updated 3 years ago
- Experiments on meta-learning algorithms to solve few-shot domain adaptation — ☆10 · updated 3 years ago
- Code for the paper "Getting a CLUE: A Method for Explaining Uncertainty Estimates" — ☆35 · updated 9 months ago
- Evidential Calibration — ☆11 · updated 2 years ago
- Code for the ICML '19 paper "Neural Network Attributions: A Causal Perspective" — ☆51 · updated 3 years ago
- (untitled) — ☆11 · updated 4 years ago
- Implementation of the models and datasets used in "An Information-theoretic Approach to Distribution Shifts" — ☆25 · updated 3 years ago
- (untitled) — ☆32 · updated 6 years ago
- Experiments to reproduce results in "Interventional Causal Representation Learning" — ☆25 · updated last year
- Code for "Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties" — ☆18 · updated 3 years ago
- Code to reproduce the paper on probabilistic algorithmic recourse: https://arxiv.org/abs/2006.06831 — ☆36 · updated 2 years ago
- CEML: Counterfactuals for Explaining Machine Learning models — a Python toolbox — ☆42 · updated 6 months ago
- Official repository for the AAAI-21 paper "Explainable Models with Consistent Interpretations" — ☆18 · updated 2 years ago
- Python code for the NeurIPS 2018 paper "Causal Inference and Mechanism Clustering of a Mixture of Additive Noise Models" — ☆22 · updated 5 years ago
- Self-Explaining Neural Networks — ☆13 · updated last year
- Active and Sample-Efficient Model Evaluation — ☆24 · updated 3 years ago
- A collection of algorithms for counterfactual explanations — ☆50 · updated 3 years ago
- Distributional Shapley: A Distributional Framework for Data Valuation — ☆30 · updated 9 months ago
- (untitled) — ☆16 · updated last year