d909b / cxplain
Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.
☆131 · Updated 4 years ago
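To give a feel for the method before the list of alternatives: CXPlain (Schwab & Karlen, NeurIPS 2019) scores features by a Granger-causal criterion, asking how much the model's error grows when a feature is removed, and then trains a separate explanation model on those scores. The sketch below is a minimal illustration of the masking step only; it is not the cxplain library API, and `model_predict`, `mask_value`, and the toy data are hypothetical stand-ins.

```python
# Minimal sketch of the Granger-causal attribution idea behind CXPlain.
# Illustrative only; not the cxplain library API.
import numpy as np

def granger_attributions(model_predict, x, y, mask_value=0.0):
    """Score each feature by how much masking it increases the loss.

    model_predict: callable mapping a (n_features,) array to a probability.
    x: input example, shape (n_features,).
    y: true binary label (0 or 1).
    """
    def loss(p):  # binary cross-entropy against the true label
        p = np.clip(p, 1e-7, 1 - 1e-7)
        return -(y * np.log(p) + (1 - y) * np.log(1 - p))

    base = loss(model_predict(x))
    deltas = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        x_masked = x.copy()
        x_masked[i] = mask_value  # "remove" feature i by masking it
        deltas[i] = max(loss(model_predict(x_masked)) - base, 0.0)

    total = deltas.sum()
    return deltas / total if total > 0 else deltas  # normalized importances

# Toy usage: a fixed "model" that weights the first feature most heavily,
# so that feature should receive the largest attribution.
if __name__ == "__main__":
    w = np.array([2.0, 0.5, 0.1])
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    model_predict = lambda x: sigmoid(x @ w)
    print(granger_attributions(model_predict, np.array([1.0, 1.0, 1.0]), 1))
```

In the paper, these per-feature loss deltas are used as supervised targets for a standalone explanation model, so attributions for new inputs come from a single forward pass, with bootstrap ensembles providing uncertainty estimates.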
Alternatives and similar repositories for cxplain
Users interested in cxplain are comparing it to the libraries listed below.
- ☆124 · Updated 4 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" (ICLR 2019) ☆129 · Updated 3 years ago
- Codebase for "Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains Its Predictions" (to appear in AA… ☆75 · Updated 7 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 4 years ago
- Repository for Deep Structural Causal Models for Tractable Counterfactual Inference ☆286 · Updated 2 years ago
- Tools for training explainable models using attribution priors. ☆124 · Updated 4 years ago
- Calibration library and code for the paper: Verified Uncertainty Calibration. Ananya Kumar, Percy Liang, Tengyu Ma. NeurIPS 2019 (Spotlig… ☆150 · Updated 2 years ago
- A lightweight implementation of removal-based explanations for ML models. ☆58 · Updated 4 years ago
- Codebase for INVASE: Instance-wise Variable Selection (ICLR 2019) ☆62 · Updated 5 years ago
- ☆91 · Updated 2 years ago
- Code for our ICML '19 paper: Neural Network Attributions: A Causal Perspective. ☆51 · Updated 3 years ago
- Code for "Neural causal learning from unknown interventions" ☆104 · Updated 5 years ago
- General-purpose library for BNNs, and implementation of OC-BNNs in our 2020 NeurIPS paper. ☆38 · Updated 3 years ago
- Code and data for the experiments in "On Fairness and Calibration" ☆51 · Updated 3 years ago
- Code for ICLR 2020 paper: "Estimating counterfactual treatment outcomes over time through adversarially balanced representations" by I. B… ☆59 · Updated last year
- Neural Additive Models (Google Research) ☆71 · Updated 3 years ago
- Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true cla… ☆244 · Updated 2 years ago
- Code to reproduce our paper on probabilistic algorithmic recourse: https://arxiv.org/abs/2006.06831 ☆36 · Updated 2 years ago
- List of relevant resources for machine learning from explanatory supervision ☆158 · Updated 3 weeks ago
- Code for "Generative causal explanations of black-box classifiers" ☆34 · Updated 4 years ago
- Keras implementation for DASP: Deep Approximate Shapley Propagation (ICML 2019) ☆61 · Updated 6 years ago
- Model Agnostic Counterfactual Explanations ☆87 · Updated 2 years ago
- Deep Neural Decision Trees ☆162 · Updated 3 years ago
- An implementation of the Deep Neural Decision Forests in PyTorch ☆163 · Updated 6 years ago
- Implementation of the paper "Shapley Explanation Networks" ☆88 · Updated 4 years ago
- Code for the Structural Agnostic Model (https://arxiv.org/abs/1803.04929) ☆53 · Updated 4 years ago
- Realistic benchmark for different causal inference methods. The realism comes from fitting generative models to data with an assumed caus… ☆77 · Updated 4 years ago
- An amortized approach for calculating local Shapley value explanations ☆98 · Updated last year
- Code repository for our paper "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift": https://arxiv.org/abs/1810.119… ☆105 · Updated last year
- Package for causal inference in graphs and in the pairwise settings. Tools for graph structure recovery and dependencies are included. ☆31 · Updated 5 years ago