algofairness / BlackBoxAuditing
Research code for auditing and exploring black box machine-learning models.
☆130 · Updated last year
Related projects
Alternatives and complementary repositories for BlackBoxAuditing
- Comparing fairness-aware machine learning techniques. ☆159 · Updated last year
- Repo for GIAN fairness course ☆54 · Updated 7 years ago
- Python tools to check recourse in linear classification ☆74 · Updated 3 years ago
- A library that implements fairness-aware machine learning algorithms ☆124 · Updated 4 years ago
- Accompanying source code for "Runaway Feedback Loops in Predictive Policing" ☆16 · Updated 6 years ago
- Python code for training fair logistic regression classifiers. ☆189 · Updated 2 years ago
- Themis™ is a software fairness tester. ☆101 · Updated 4 years ago
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP! ☆28 · Updated 5 years ago
- This repository contains the full code for the "Towards fairness in machine learning with adversarial networks" blog post. ☆117 · Updated 3 years ago
- Simple, customizable risk scores in Python ☆132 · Updated last year
- Simplified tree-based classifier and regressor for interpretable machine learning (scikit-learn compatible) ☆47 · Updated 3 years ago
- Experiments for AAAI anchor paper ☆61 · Updated 6 years ago
- Code and data for the experiments in "On Fairness and Calibration" ☆49 · Updated 2 years ago
- Learning Certifiably Optimal Rule Lists ☆172 · Updated 3 years ago
- Python package for creating rule-based explanations for classifiers. ☆59 · Updated 4 years ago
- Detect demographic differences in the output of machine learning models or other assessments ☆312 · Updated 4 years ago
- Interpretable ML package designed to explain any machine learning model. ☆61 · Updated 6 years ago
- A visual analytic system for fair data-driven decision making ☆25 · Updated last year
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆80 · Updated last year
- H2O.ai Machine Learning Interpretability Resources ☆483 · Updated 3 years ago
- An open-source package that evaluates the fairness of a neural network using a fairness metric called the p% rule (see the sketch after this list). ☆18 · Updated 5 years ago
- NEXT is a machine learning system that runs in the cloud and makes it easy to develop, evaluate, and apply active learning in the real world. ☆160 · Updated 4 months ago
- Simple, customizable scoring systems in Python ☆41 · Updated last year
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆75 · Updated last year
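
For context, the p% rule mentioned in one of the entries above compares the positive-prediction rates of two demographic groups: the smaller of the two ratios, expressed as a percentage, should exceed p (commonly 80). The sketch below is a minimal illustration of that metric, not code taken from any of the listed repositories; the function and variable names are illustrative assumptions.

```python
# Minimal sketch of the p% rule (disparate-impact ratio) for a binary
# classifier and a binary sensitive attribute. Names are hypothetical.
import numpy as np

def p_percent_rule(y_pred, sensitive):
    """Return the p% value: the smaller ratio of positive-prediction
    rates between the two groups, as a percentage (0-100)."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()  # P(y_hat = 1 | group 0)
    rate_b = y_pred[sensitive == 1].mean()  # P(y_hat = 1 | group 1)
    if rate_a == 0 or rate_b == 0:
        return 0.0
    return 100 * min(rate_a / rate_b, rate_b / rate_a)

# Example: 8 predictions, 4 individuals per group.
y_hat = [1, 1, 0, 1, 1, 0, 0, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]
print(p_percent_rule(y_hat, group))  # ~33.3, below the common 80% threshold
```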