gpleiss / equalized_odds_and_calibration
Code and data for the experiments in "On Fairness and Calibration"
☆51 · Updated 3 years ago
Alternatives and similar repositories for equalized_odds_and_calibration
Users interested in equalized_odds_and_calibration are comparing it to the repositories listed below.
- ☆125 · Updated 4 years ago
- This repository contains the full code for the "Towards fairness in machine learning with adversarial networks" blog post. ☆119 · Updated 4 years ago
- ☆135 · Updated 6 years ago
- Python tools to check recourse in linear classification. ☆77 · Updated 5 years ago
- A benchmark for evaluating the quality of local machine-learning explanations generated by any explainer for text and image data. ☆30 · Updated 4 years ago
- Code/figures for "Right for the Right Reasons". ☆57 · Updated 5 years ago
- Comparing fairness-aware machine learning techniques. ☆160 · Updated 3 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019). ☆129 · Updated 4 years ago
- References for Papers at the Intersection of Causality and Fairness. ☆18 · Updated 7 years ago
- Code for the paper "Blind Justice: Fairness with Encrypted Sensitive Attributes" (ICML 2018). ☆14 · Updated 6 years ago
- Python code for training fair logistic regression classifiers. ☆192 · Updated 4 years ago
- ☆43 · Updated 7 years ago
- Model Agnostic Counterfactual Explanations. ☆88 · Updated 3 years ago
- Codebase for "Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains Its Predictions" (to appear in AA… ☆76 · Updated 8 years ago
- Software and pre-processed data for "Using Embeddings to Correct for Unobserved Confounding in Networks". ☆57 · Updated 2 years ago
- Experiments for AAAI anchor paper. ☆66 · Updated 7 years ago
- ☆87 · Updated 5 years ago
- Software and data for "Using Text Embeddings for Causal Inference". ☆126 · Updated 5 years ago
- Supervised Local Modeling for Interpretability. ☆29 · Updated 7 years ago
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. ☆132 · Updated 5 years ago
- ☆26 · Updated 8 years ago
- Non-Parametric Calibration for Classification (AISTATS 2020). ☆19 · Updated 3 years ago
- Interpretation of Neural Networks is Fragile. ☆36 · Updated last year
- Code for "Neural causal learning from unknown interventions"☆104Updated 5 years ago
- Explaining a black box using the Deep Variational Information Bottleneck approach. ☆46 · Updated 3 years ago
- Data and code related to the paper "Probabilistic matrix factorization for automated machine learning" (NIPS 2018).