sumonbis / FairPreprocessing
This repository contains the artifacts accompanying the paper "Fair Preprocessing"
☆13 · Updated 4 years ago
Alternatives and similar repositories for FairPreprocessing
Users interested in FairPreprocessing are comparing it to the libraries listed below
- ☆33 · Updated 3 years ago
- A toolbox for differentially private data generation ☆132 · Updated 2 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated 2 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆83 · Updated 2 years ago
- The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021). ☆220 · Updated 2 years ago
- ☆124 · Updated 4 years ago
- Comparing fairness-aware machine learning techniques. ☆159 · Updated 2 years ago
- Model Agnostic Counterfactual Explanations ☆87 · Updated 2 years ago
- Python code for training fair logistic regression classifiers. ☆189 · Updated 3 years ago
- Keras implementation for DASP: Deep Approximate Shapley Propagation (ICML 2019) ☆61 · Updated 6 years ago
- ⚖️ Code for the paper "Ethical Adversaries: Towards Mitigating Unfairness with Adversarial Machine Learning". ☆11 · Updated 2 years ago
- Code and data for the experiments in "On Fairness and Calibration" ☆51 · Updated 3 years ago
- Distributional Shapley: A Distributional Framework for Data Valuation ☆30 · Updated last year
- [NeurIPS 2019] H. Chen*, H. Zhang*, S. Si, Y. Li, D. Boning and C.-J. Hsieh, Robustness Verification of Tree-based Models (*equal contrib… ☆27 · Updated 6 years ago
- Code accompanying the paper "Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers" ☆31 · Updated 2 years ago
- All about explainable AI, algorithmic fairness and more ☆110 · Updated last year
- A lightweight implementation of removal-based explanations for ML models. ☆58 · Updated 4 years ago
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. ☆131 · Updated 5 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆129 · Updated 4 years ago
- This repository contains the full code for the "Towards fairness in machine learning with adversarial networks" blog post. ☆118 · Updated 4 years ago
- Fair Empirical Risk Minimization (FERM) ☆37 · Updated 4 years ago
- For calculating Shapley values via linear regression. ☆70 · Updated 4 years ago
- Supervised Local Modeling for Interpretability ☆29 · Updated 6 years ago
- Datasets derived from US census data ☆268 · Updated last year
- ML models often mispredict, and it is hard to tell when and why. We present a data mining based approach to discover whether there is a c… ☆18 · Updated 3 years ago
- PyTorch package to train and audit ML models for Individual Fairness ☆66 · Updated 3 months ago
- Measuring data importance over ML pipelines using the Shapley value. ☆43 · Updated last week
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] ☆50 · Updated 5 years ago
- 💱 A curated list of data valuation (DV) to design your next data marketplace ☆125 · Updated 6 months ago
- ☆20 · Updated 6 years ago