Piyushi-0 / ACE
Code for our ICML '19 paper, "Neural Network Attributions: A Causal Perspective".
☆51 · Updated 3 years ago
Alternatives and similar repositories for ACE
Users interested in ACE are comparing it to the libraries listed below.
- ☆32 · Updated 6 years ago
- Code for "Generative causal explanations of black-box classifiers" ☆34 · Updated 4 years ago
- Code to reproduce our paper on probabilistic algorithmic recourse: https://arxiv.org/abs/2006.06831 ☆36 · Updated 2 years ago
- Code accompanying the paper "Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers" ☆31 · Updated 2 years ago
- Code for the Structural Agnostic Model (https://arxiv.org/abs/1803.04929) ☆52 · Updated 4 years ago
- Self-Explaining Neural Networks ☆13 · Updated last year
- Explaining Image Classifiers by Counterfactual Generation ☆28 · Updated 3 years ago
- A lightweight implementation of removal-based explanations for ML models. ☆59 · Updated 3 years ago
- General purpose library for BNNs, and implementation of OC-BNNs in our 2020 NeurIPS paper. ☆38 · Updated 3 years ago
- Classifier Conditional Independence Test: A CI test that uses a binary classifier (XGBoost) for CI testing ☆45 · Updated last year
- Package for causal inference in graphs and in the pairwise settings. Tools for graph structure recovery and dependencies are included. ☆31 · Updated 5 years ago
- Self-Explaining Neural Networks ☆42 · Updated 5 years ago
- ☆29 · Updated 6 years ago
- Code for paper "EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE" ☆40 · Updated 2 years ago
- References for Papers at the Intersection of Causality and Fairness ☆18 · Updated 6 years ago
- An Empirical Study of Invariant Risk Minimization ☆27 · Updated 4 years ago
- ☆124 · Updated 4 years ago
- ☆39 · Updated 6 years ago
- ☆44 · Updated 3 years ago
- ☆65 · Updated 11 months ago
- Implementation of the models and datasets used in "An Information-theoretic Approach to Distribution Shifts" ☆25 · Updated 3 years ago
- Active and Sample-Efficient Model Evaluation ☆24 · Updated last month
- 🤖🤖 Attentive Mixtures of Experts (AMEs) are neural network models that learn to output both accurate predictions and estimates of featu… ☆42 · Updated 2 years ago
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. ☆131 · Updated 4 years ago
- Code/figures in "Right for the Right Reasons" ☆55 · Updated 4 years ago
- GitHub repository for the NeurIPS 2020 paper "Learning outside the black-box: at the pursuit of interpretable models" ☆15 · Updated 2 years ago
- Repository for code release of the paper "Robust Variational Autoencoders for Outlier Detection and Repair of Mixed-Type Data" (AISTATS 2020) ☆50 · Updated 5 years ago
- Feature Interaction Interpretability via Interaction Detection ☆34 · Updated 2 years ago
- Non-Parametric Calibration for Classification (AISTATS 2020) ☆19 · Updated 3 years ago
- Code for "Neural causal learning from unknown interventions" ☆103 · Updated 4 years ago