ustunb / actionable-recourse
Python tools to check recourse in linear classification
☆76 · Updated 4 years ago
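For context on what the main repository checks: recourse in a linear classifier asks whether a person who receives an unfavorable prediction can flip it by changing only the features they are able to act on. The sketch below is a minimal, conceptual illustration of that feasibility check in NumPy; it is not the actionable-recourse API (the library formulates recourse as an optimization over a user-defined action set), and it only searches single-feature changes within hypothetical bounds.

```python
import numpy as np

def has_recourse(w, b, x, actionable, lower, upper):
    """Return (True, x_new) if moving a single actionable feature within its
    [lower, upper] bound makes w @ x_new + b >= 0; otherwise (False, None).
    This is a simplification: the real problem allows multi-feature actions."""
    if w @ x + b >= 0:                 # already on the favorable side
        return True, x.copy()
    for j in np.flatnonzero(actionable):
        x_new = x.copy()
        # push feature j as far as its bound allows in the helpful direction
        x_new[j] = upper[j] if w[j] > 0 else lower[j]
        if w @ x_new + b >= 0:
            return True, x_new
    return False, None

# Toy example: the third feature is immutable (e.g. age), the first two are actionable.
w = np.array([0.8, -0.5, 0.3])
b = -0.3
x = np.array([0.2, 0.9, 0.1])
actionable = np.array([True, True, False])
lower = np.array([0.0, 0.0, 0.0])
upper = np.array([1.0, 1.0, 1.0])

ok, x_new = has_recourse(w, b, x, actionable, lower, upper)
print(ok, x_new)   # recourse found by raising the first feature to its upper bound
```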
Alternatives and similar repositories for actionable-recourse
Users interested in actionable-recourse are comparing it to the libraries listed below.
- Model Agnostic Counterfactual Explanations · ☆87 · Updated 2 years ago
- Code and data for the experiments in "On Fairness and Calibration" · ☆51 · Updated 3 years ago
- Comparing fairness-aware machine learning techniques. · ☆159 · Updated 2 years ago
- ☆9 · Updated 4 years ago
- Python code for training fair logistic regression classifiers. · ☆189 · Updated 3 years ago
- Code for reproducing results in "Delayed Impact of Fair Machine Learning" (Liu et al. 2018) · ☆14 · Updated 2 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics · ☆77 · Updated 2 years ago
- Code/figures in "Right for the Right Reasons" · ☆55 · Updated 4 years ago
- Supervised Local Modeling for Interpretability · ☆29 · Updated 6 years ago
- Code to reproduce our paper on probabilistic algorithmic recourse: https://arxiv.org/abs/2006.06831 · ☆36 · Updated 2 years ago
- Code for "Counterfactual Fairness" (NIPS 2017) · ☆54 · Updated 6 years ago
- All about explainable AI, algorithmic fairness and more · ☆109 · Updated last year
- ☆32 · Updated last year
- Datasets derived from US census data · ☆263 · Updated last year
- Multi-Objective Counterfactuals · ☆41 · Updated 2 years ago
- ☆134 · Updated 5 years ago
- A Python library to discover and mitigate biases in machine learning models and datasets · ☆20 · Updated last year
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) · ☆82 · Updated 2 years ago
- A lightweight implementation of removal-based explanations for ML models. · ☆59 · Updated 3 years ago
- Hands-on tutorial on ML Fairness · ☆72 · Updated last year
- LOcal Rule-based Explanations · ☆51 · Updated last year
- ☆57 · Updated 4 years ago
- Simple, customizable risk scores in Python · ☆137 · Updated last year
- ☆124 · Updated 4 years ago
- Research code for auditing and exploring black-box machine-learning models. · ☆132 · Updated 2 years ago
- Library for fair auditing and learning of classifiers with respect to rich subgroup fairness. · ☆32 · Updated 5 years ago
- 💊 Comparing causality methods in a fair and just way. · ☆139 · Updated 5 years ago
- A library that implements fairness-aware machine learning algorithms · ☆125 · Updated 4 years ago
- Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University · ☆44 · Updated 2 years ago
- Achieve error-rate fairness between societal groups for any score-based classifier. · ☆18 · Updated last year