lydiatliu / delayedimpact
Code for reproducing results in "Delayed Impact of Fair Machine Learning" (Liu et al., 2018)
☆14 · Updated 2 years ago
Alternatives and similar repositories for delayedimpact
Users interested in delayedimpact are comparing it to the repositories listed below:
- Python tools to check recourse in linear classification ☆76 · Updated 4 years ago
- Code and data for the experiments in "On Fairness and Calibration" ☆51 · Updated 3 years ago
- Comparing fairness-aware machine learning techniques. ☆159 · Updated 2 years ago
- Code to reproduce our paper on probabilistic algorithmic recourse: https://arxiv.org/abs/2006.06831 ☆36 · Updated 2 years ago
- Model Agnostic Counterfactual Explanations ☆87 · Updated 2 years ago
- Code for the paper "Blind Justice: Fairness with Encrypted Sensitive Attributes", ICML 2018 ☆14 · Updated 6 years ago
- A Python library to discover and mitigate biases in machine learning models and datasets ☆20 · Updated last year
- Accompanying source code for "Runaway Feedback Loops in Predictive Policing" ☆17 · Updated 7 years ago
- Supervised Local Modeling for Interpretability ☆29 · Updated 6 years ago
- ☆9 · Updated 4 years ago
- ☆87 · Updated 5 years ago
- Python code for training fair logistic regression classifiers. ☆189 · Updated 3 years ago
- A benchmark for evaluating the quality of local machine learning explanations generated by any explainer for text and image data ☆29 · Updated 4 years ago
- ☆32 · Updated last year
- Code and data for decision making under strategic behavior, NeurIPS 2020 & Management Science 2024. ☆29 · Updated last year
- Software and pre-processed data for "Using Embeddings to Correct for Unobserved Confounding in Networks" ☆56 · Updated 2 years ago
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP! ☆28 · Updated 5 years ago
- This is a public collection of papers related to machine learning model interpretability. ☆26 · Updated 3 years ago
- Guidelines for the responsible use of explainable AI and machine learning. ☆17 · Updated 2 years ago
- (ICML 2020) "Counterfactual Cross-Validation: Stable Model Selection Procedure for Causal Inference Models" ☆31 · Updated 2 years ago
- Code/figures in Right for the Right Reasons ☆55 · Updated 4 years ago
- ☆57 · Updated 4 years ago
- Code for "Counterfactual Fairness" (NIPS 2017) ☆54 · Updated 6 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated 2 years ago
- Implementation of provably Rawlsian fair ML algorithms for contextual bandits. ☆14 · Updated 8 years ago
- Using / reproducing DAC from the paper "Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees" ☆28 · Updated 4 years ago
- Repository of experiments in fair machine learning. ☆10 · Updated last year
- Hands-on tutorial on ML Fairness