d909b / cxplain
Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.
⭐130, updated 4 years ago
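At its core, CXPlain trains a separate explanation model to predict how much each input feature contributes to the black-box model's error, where the per-feature contributions are estimated by masking features and measuring the resulting increase in loss. The sketch below is a conceptual illustration of that masking idea only, not the d909b/cxplain API: the names `black_box`, `per_sample_log_loss`, and `explainer` are placeholders, sklearn stands in for the repository's Keras/TensorFlow components, and parts of the full method (such as its uncertainty estimates) are omitted.

```python
# Conceptual sketch of the masking-based attribution idea behind CXPlain
# (not the d909b/cxplain API): mask each feature, measure the increase in
# the black-box model's loss, normalise those increases into importance
# weights, and train a separate explanation model to predict them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPRegressor

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Any pre-trained black-box model works; logistic regression stands in here.
black_box = LogisticRegression(max_iter=1000).fit(X, y)

def per_sample_log_loss(model, X, y):
    """Negative log-likelihood of the true class for each sample."""
    proba = model.predict_proba(X)
    eps = 1e-12
    return -np.log(np.clip(proba[np.arange(len(y)), y], eps, None))

# Attribution targets: the loss increase when each feature is zero-masked,
# normalised into a per-sample distribution over features.
base_loss = per_sample_log_loss(black_box, X, y)
deltas = np.zeros_like(X)
for j in range(X.shape[1]):
    X_masked = X.copy()
    X_masked[:, j] = 0.0                      # zero masking of feature j
    deltas[:, j] = per_sample_log_loss(black_box, X_masked, y) - base_loss
deltas = np.maximum(deltas, 0.0)
targets = deltas / (deltas.sum(axis=1, keepdims=True) + 1e-12)

# Explanation model: learns to predict the importance distribution directly,
# so attributions for new inputs cost a single forward pass.
explainer = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                         random_state=0).fit(X, targets)
attributions = explainer.predict(X[:5])
print(np.round(attributions, 3))
```

The payoff of this design is amortisation: once the explanation model is trained, attributions no longer require repeated masking passes through the black-box model.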
Alternatives and similar repositories for cxplain:
Users interested in cxplain are comparing it to the libraries listed below.
- ⭐124, updated 3 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" (ICLR 2019) · ⭐128, updated 3 years ago
- Repository for Deep Structural Causal Models for Tractable Counterfactual Inference · ⭐273, updated last year
- ⭐88, updated last year
- A lightweight implementation of removal-based explanations for ML models. · ⭐57, updated 3 years ago
- Codebase for "Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains Its Predictions" (to appear in AA… · ⭐73, updated 7 years ago
- Code for ICLR 2020 paper: "Estimating counterfactual treatment outcomes over time through adversarially balanced representations" by I. B… · ⭐58, updated 9 months ago
- Code for the paper: Amortized Causal Discovery: Learning to Infer Causal Graphs from Time-Series Data · ⭐206, updated 2 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… · ⭐128, updated 3 years ago
- Package for causal inference in graphs and in the pairwise settings. Tools for graph structure recovery and dependencies are included. · ⭐30, updated 5 years ago
- Tools for training explainable models using attribution priors. · ⭐120, updated 3 years ago
- Keras implementation for DASP: Deep Approximate Shapley Propagation (ICML 2019) · ⭐60, updated 5 years ago
- Code for our ICML '19 paper: Neural Network Attributions: A Causal Perspective. · ⭐51, updated 3 years ago
- References for Papers at the Intersection of Causality and Fairness · ⭐18, updated 6 years ago
- Perfect Match is a simple method for learning representations for counterfactual inference with neural networks. · ⭐125, updated last year
- Implementation of the paper "Shapley Explanation Networks" · ⭐86, updated 4 years ago
- Codebase for INVASE: Instance-wise Variable Selection - 2019 ICLR · ⭐60, updated 4 years ago
- GRACE: Generating Concise and Informative Contrastive Sample to Explain Neural Network Model's Prediction. Thai Le, Suhang Wang, Dongwon … · ⭐21, updated 4 years ago
- ⭐132, updated 5 years ago
- Realistic benchmark for different causal inference methods. The realism comes from fitting generative models to data with an assumed caus… · ⭐71, updated 3 years ago
- Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true cla… · ⭐235, updated last year
- Code for "Generative causal explanations of black-box classifiers" · ⭐33, updated 4 years ago
- MLSS2019 Tutorial on Bayesian Deep Learning · ⭐92, updated 5 years ago
- Detecting Statistical Interactions from Neural Network Weights · ⭐47, updated 4 years ago
- An amortized approach for calculating local Shapley value explanations · ⭐94, updated last year
- ⭐37, updated 6 years ago
- Code for the Structural Agnostic Model (https://arxiv.org/abs/1803.04929) · ⭐53, updated 4 years ago
- Calibration library and code for the paper: Verified Uncertainty Calibration. Ananya Kumar, Percy Liang, Tengyu Ma. NeurIPS 2019 (Spotlig… · ⭐143, updated 2 years ago
- Model Agnostic Counterfactual Explanations · ⭐87, updated 2 years ago
- Neural Additive Models (Google Research) · ⭐68, updated 3 years ago