ustunb / actionable-recourse
python tools to check recourse in linear classification
☆75 · Updated 4 years ago
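The one-line description above is terse, so here is a minimal, self-contained sketch of what "checking recourse in linear classification" means in practice: given a linear score w·x + b and a denied individual x, check whether changing only actionable features (within bounds) can flip the decision. This does not use the actionable-recourse API; the helper `has_recourse`, its arguments, and the toy numbers are illustrative assumptions, solving for an L1-minimal action with a linear program.

```python
# Conceptual sketch of recourse in a linear classifier (not the actionable-recourse API).
import numpy as np
from scipy.optimize import linprog

def has_recourse(w, b, x, actionable, lb, ub):
    """Return (feasible, action): action a minimizes sum_i |a_i| subject to
    w @ (x + a) + b >= 0, lb <= a <= ub, and a_i = 0 for non-actionable features."""
    d = len(x)
    # Variables: a (d entries) and t (d entries) with t_i >= |a_i|; minimize sum(t).
    c = np.concatenate([np.zeros(d), np.ones(d)])
    # Decision constraint: w @ (x + a) + b >= 0  <=>  -w @ a <= w @ x + b
    A_ub = [np.concatenate([-w, np.zeros(d)])]
    b_ub = [w @ x + b]
    # Absolute-value linearization: a_i - t_i <= 0 and -a_i - t_i <= 0
    for i in range(d):
        row = np.zeros(2 * d); row[i] = 1.0; row[d + i] = -1.0
        A_ub.append(row); b_ub.append(0.0)
        row = np.zeros(2 * d); row[i] = -1.0; row[d + i] = -1.0
        A_ub.append(row); b_ub.append(0.0)
    # Non-actionable features are pinned to zero change; t is non-negative.
    bounds = [(lb[i], ub[i]) if actionable[i] else (0.0, 0.0) for i in range(d)]
    bounds += [(0.0, None)] * d
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.success, (res.x[:d] if res.success else None)

# Toy example: feature 0 (e.g. income) is actionable, feature 1 (e.g. age) is not.
w, b = np.array([2.0, 0.5]), -3.0
x = np.array([1.0, 1.0])                       # currently denied: 2 + 0.5 - 3 < 0
ok, a = has_recourse(w, b, x, actionable=[True, False],
                     lb=np.array([0.0, 0.0]), ub=np.array([5.0, 0.0]))
print(ok, a)                                   # roughly: True [0.25 0.]
```

The library's own tooling is richer than this (feature-specific action sets, costs, and enumeration of multiple recourse options), so treat the sketch only as a conceptual illustration.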
Alternatives and similar repositories for actionable-recourse:
Users interested in actionable-recourse are comparing it to the libraries listed below.
- Model Agnostic Counterfactual Explanations ☆85 · Updated 2 years ago
- Comparing fairness-aware machine learning techniques. ☆160 · Updated 2 years ago
- ☆9 · Updated 4 years ago
- Code and data for the experiments in "On Fairness and Calibration" ☆50 · Updated 2 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated last year
- Python code for training fair logistic regression classifiers. ☆189 · Updated 3 years ago
- Multi-Objective Counterfactuals ☆41 · Updated 2 years ago
- Code to reproduce our paper on probabilistic algorithmic recourse: https://arxiv.org/abs/2006.06831 ☆36 · Updated 2 years ago
- Code for "Counterfactual Fairness" (NIPS 2017) ☆52 · Updated 6 years ago
- Hands-on tutorial on ML Fairness ☆70 · Updated last year
- CAIPI turns LIMEs into trust! ☆12 · Updated 4 years ago
- A Python library to discover and mitigate biases in machine learning models and datasets ☆19 · Updated last year
- Datasets derived from US census data ☆252 · Updated 9 months ago
- Code for the paper "Blind Justice: Fairness with Encrypted Sensitive Attributes", ICML 2018 ☆14 · Updated 5 years ago
- ☆53 · Updated 5 years ago
- A Python package for unwrapping ReLU DNNs ☆70 · Updated last year
- Code/figures in Right for the Right Reasons ☆55 · Updated 4 years ago
- Public home of pycorels, the Python binding to CORELS ☆77 · Updated 4 years ago
- This repository contains the full code for the "Towards fairness in machine learning with adversarial networks" blog post. ☆117 · Updated 3 years ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆42 · Updated 7 months ago
- All about explainable AI, algorithmic fairness and more ☆107 · Updated last year
- A library that implements fairness-aware machine learning algorithms ☆125 · Updated 4 years ago
- Repository for the R library "sbrlmod" ☆25 · Updated 9 months ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- A benchmark to evaluate the quality of local machine learning explanations generated by any explainer for text and image data ☆30 · Updated 3 years ago
- Achieve error-rate fairness between societal groups for any score-based classifier. ☆16 · Updated 10 months ago
- Repository of experiments in fair machine learning. ☆9 · Updated 8 months ago
- 💊 Comparing causality methods in a fair and just way. ☆138 · Updated 4 years ago
- Supervised Local Modeling for Interpretability ☆28 · Updated 6 years ago
- ☆313 · Updated last year