charmlab / mace
Model Agnostic Counterfactual Explanations
☆87 · Updated 2 years ago
Alternatives and similar repositories for mace:
Users interested in mace are comparing it to the libraries listed below.
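For orientation, the sketch below shows the bare idea that mace and the libraries listed here implement far more carefully (with formal solvers, gradient methods, or causal constraints): given a trained classifier and a factual input, search for a nearby point whose prediction flips. The model, data, and brute-force grid search are all hypothetical illustrations, not the API of mace or any package below.

```python
# Minimal, library-agnostic sketch of counterfactual explanation search.
# Hypothetical setup: a logistic regression on toy 2-D data; we look for the
# closest point (L2 distance) on a coarse grid whose predicted class differs
# from the factual instance's prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

x = np.array([-0.5, -0.5])            # factual instance
target = 1 - clf.predict(x[None])[0]  # we want the opposite prediction

# Enumerate candidate perturbations on a grid and keep the closest one
# whose prediction matches the target class.
deltas = np.stack(np.meshgrid(np.linspace(-2, 2, 41),
                              np.linspace(-2, 2, 41)), axis=-1).reshape(-1, 2)
candidates = x + deltas
valid = candidates[clf.predict(candidates) == target]
counterfactual = valid[np.argmin(np.linalg.norm(valid - x, axis=1))]
print("factual:", x, "counterfactual:", counterfactual)
```

The listed packages replace this brute-force search with principled machinery: SMT/MILP solvers, multi-objective optimization, probabilistic or causal models, and constraints that keep the counterfactual realistic and actionable.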
- Python tools to check recourse in linear classification ☆75 · Updated 4 years ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆43 · Updated 7 months ago
- Multi-Objective Counterfactuals ☆41 · Updated 2 years ago
- Code to reproduce our paper on probabilistic algorithmic recourse: https://arxiv.org/abs/2006.06831 ☆36 · Updated 2 years ago
- Code accompanying the paper "Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers" ☆30 · Updated last year
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated last year
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms ☆286 · Updated last year
- A lightweight implementation of removal-based explanations for ML models. ☆59 · Updated 3 years ago
- A collection of counterfactual explanation algorithms. ☆50 · Updated 3 years ago
- All about explainable AI, algorithmic fairness and more ☆107 · Updated last year
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- Realistic benchmark for different causal inference methods. The realism comes from fitting generative models to data with an assumed caus… ☆72 · Updated 3 years ago
- Code and data for the experiments in "On Fairness and Calibration" ☆50 · Updated 2 years ago
- Datasets derived from US census data ☆255 · Updated 10 months ago
- LOcal Rule-based Explanations ☆53 · Updated last year
- Code repository for our paper "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift": https://arxiv.org/abs/1810.119… ☆103 · Updated 11 months ago
- For calculating Shapley values via linear regression. ☆67 · Updated 3 years ago
- Fast and incremental explanations for online machine learning models. Works best with the river framework. ☆54 · Updated 2 months ago
- Generalized Optimal Sparse Decision Trees ☆62 · Updated last year
- Fair Empirical Risk Minimization (FERM) ☆37 · Updated 4 years ago
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. ☆130 · Updated 4 years ago
- Code for "Counterfactual Fairness" (NIPS 2017) ☆52 · Updated 6 years ago
- Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University ☆45 · Updated 2 years ago
- Supervised Local Modeling for Interpretability ☆28 · Updated 6 years ago
- A Python package providing two algorithms, DAME and FLAME, for fast and interpretable treatment-control matching of categorical data ☆56 · Updated 9 months ago
- Public home of pycorels, the Python binding to CORELS ☆77 · Updated 4 years ago
- A Python package for unwrapping ReLU DNNs ☆69 · Updated last year
- Code for "NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning" ☆43 · Updated 2 years ago
- 💊 Comparing causality methods in a fair and just way. ☆138 · Updated 4 years ago