ayong8 / FairSight
A visual analytic system for fair data-driven decision making
☆25 · Updated 2 years ago

Alternatives and similar repositories for FairSight:
Users interested in FairSight are comparing it to the libraries listed below.
- this repo might get accepted (☆28, updated 4 years ago)
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP! (☆28, updated 5 years ago)
- A library that implements fairness-aware machine learning algorithms (☆124, updated 4 years ago)
- Python package for creating rule-based explanations for classifiers. (☆60, updated 5 years ago)
- Supervised Local Modeling for Interpretability (☆28, updated 6 years ago)
- Practical ideas on securing machine learning models (☆36, updated 3 years ago)
- FairVis: Visual Analytics for Discovering Intersectional Bias in Machine Learning (☆38, updated last year)
- automatic data slicing (☆34, updated 3 years ago)
- python tools to check recourse in linear classification (☆76, updated 4 years ago)
- Comparing fairness-aware machine learning techniques. (☆159, updated 2 years ago)
- Paper and talk from KDD 2019 XAI Workshop (☆20, updated 4 years ago)
- A benchmark for evaluating the quality of local machine learning explanations generated by any explainer, for text and image data (☆30, updated 3 years ago)
- Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰 (☆96, updated last year)
- (☆58, updated 4 years ago)
- Proposal Documents for Fairlearn (☆9, updated 4 years ago)
- A collection of machine learning model cards and datasheets. (☆75, updated 10 months ago)
- Hands-on tutorial on ML Fairness (☆71, updated last year)
- Reading history for the Fair ML Reading Group in Melbourne (☆36, updated 3 years ago)
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics (☆77, updated last year)
- Slides, videos and other potentially useful artifacts from various presentations on responsible machine learning. (☆22, updated 5 years ago)
- Running Prodigy for a team of annotators (☆53, updated 4 years ago)
- (☆134, updated 5 years ago)
- (☆20, updated 4 years ago)
- Code repository for our paper "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift": https://arxiv.org/abs/1810.119… (☆104, updated last year)
- Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University (☆45, updated 2 years ago)
- Accompanying source code for "Runaway Feedback Loops in Predictive Policing" (☆16, updated 7 years ago)
- (☆102, updated last year)
- An open-source package that evaluates the fairness of a neural network using a fairness metric called the p% rule (☆18, updated 6 years ago)
- Awesome list of AI Fairness tools, research papers, tutorials and any other relevant materials. For use by data scientists, AI engineers … (☆16, updated 6 years ago)
- Model Agnostic Counterfactual Explanations (☆87, updated 2 years ago)
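For context, the p% rule named in one of the entries above compares the positive-prediction rates of two groups: a classifier satisfies the rule at level p if the smaller rate is at least p% of the larger one (the classic legal threshold is 80%). A minimal sketch, where the function name and data are illustrative and not taken from any of the listed packages:

```python
def p_percent_rule(y_pred, group):
    """Return the p% rule value: the ratio of positive-prediction
    rates between group 0 and group 1, as a percentage (100 = parity)."""
    rate_0 = sum(y for y, g in zip(y_pred, group) if g == 0) / group.count(0)
    rate_1 = sum(y for y, g in zip(y_pred, group) if g == 1) / group.count(1)
    # Take the smaller of the two directional ratios so the score
    # penalizes disparity regardless of which group is favored.
    return 100.0 * min(rate_0 / rate_1, rate_1 / rate_0)

# Group 0 receives positive predictions at 50%, group 1 at 100%:
preds  = [1, 0, 1, 1, 1, 1]
groups = [0, 0, 1, 1, 1, 1]
print(p_percent_rule(preds, groups))  # 50.0 -> fails the 80% threshold
```

A classifier passing the common 80% rule would score at least 80.0 here; equal rates across groups give exactly 100.0.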