columbia / fairtest
☆56 · Updated 3 years ago
Related projects
Alternatives and complementary repositories for fairtest
- ☆38 · Updated 8 months ago
- Comparing fairness-aware machine learning techniques. ☆159 · Updated last year
- Research code for auditing and exploring black box machine-learning models. ☆130 · Updated last year
- ☆361 · Updated 3 years ago
- A library that implements fairness-aware machine learning algorithms ☆124 · Updated 4 years ago
- Accompanying source code for "Runaway Feedback Loops in Predictive Policing" ☆16 · Updated 6 years ago
- Themis™ is a software fairness tester. ☆102 · Updated 4 years ago
- A visual analytic system for fair data-driven decision making ☆25 · Updated last year
- ☆9 · Updated 3 years ago
- Python tools to check recourse in linear classification ☆75 · Updated 3 years ago
- Python code for training fair logistic regression classifiers. ☆189 · Updated 2 years ago
- Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰 ☆95 · Updated last year
- ☆31 · Updated 9 months ago
- This repository contains the full code for the "Towards fairness in machine learning with adversarial networks" blog post. ☆117 · Updated 3 years ago
- Repo for GIAN fairness course ☆54 · Updated 7 years ago
- Proposal Documents for Fairlearn ☆9 · Updated 4 years ago
- Code and data for the experiments in "On Fairness and Calibration" ☆50 · Updated 2 years ago
- Experiments for AAAI anchor paper ☆61 · Updated 6 years ago
- FairPrep is a design and evaluation framework for fairness-enhancing interventions that treats data as a first-class citizen. ☆11 · Updated last year
- this repo might get accepted ☆29 · Updated 3 years ago
- Code for reproducing results in "Delayed Impact of Fair Machine Learning" (Liu et al., 2018) ☆14 · Updated 2 years ago
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP! ☆28 · Updated 5 years ago
- A collection of implementations of fair ML algorithms ☆12 · Updated 6 years ago
- The Data Linter identifies potential issues (lints) in your ML training data. ☆87 · Updated 6 years ago
- Supervised Local Modeling for Interpretability ☆28 · Updated 6 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆80 · Updated last year
- Practical ideas on securing machine learning models ☆36 · Updated 3 years ago
- An open-source package that evaluates the fairness of a neural network using a fairness metric called the p% rule. ☆18 · Updated 6 years ago
- Hands-on tutorial on ML Fairness ☆69 · Updated last year
- ☆313 · Updated last year