romanlutz / ResponsibleAI
A collection of news articles, books, and papers on Responsible AI cases. The purpose is to study these cases and learn from them to avoid repeating the failures of the past.
☆64, updated 4 years ago
Alternatives and similar repositories for ResponsibleAI:
Users interested in ResponsibleAI are comparing it to the repositories listed below:
- All about explainable AI, algorithmic fairness and more (☆107, updated last year)
- Practical ideas on securing machine learning models (☆36, updated 3 years ago)
- Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University (☆45, updated 2 years ago)
- A curated list of awesome academic research, books, code of ethics, data sets, institutes, maturity models, newsletters, principles, podc… (☆68, updated this week)
- Slides, videos and other potentially useful artifacts from various presentations on responsible machine learning (☆22, updated 5 years ago)
- Python tools to check recourse in linear classification (☆75, updated 4 years ago)
- (☆102, updated last year)
- (☆9, updated 4 years ago)
- Talks / presentations / tutorials about Fairlearn and fairness in ML (☆22, updated 2 years ago)
- A library that implements fairness-aware machine learning algorithms (☆125, updated 4 years ago)
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics (☆77, updated last year)
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP! (☆28, updated 5 years ago)
- Comparing fairness-aware machine learning techniques (☆160, updated 2 years ago)
- Reading history for Fair ML Reading Group in Melbourne (☆36, updated 3 years ago)
- Proposal Documents for Fairlearn (☆9, updated 4 years ago)
- Hands-on tutorial on ML Fairness (☆71, updated last year)
- (☆58, updated 3 years ago)
- (☆35, updated last week)
- Applied Machine Learning with Python (☆78, updated 11 months ago)
- A visual analytic system for fair data-driven decision making (☆25, updated 2 years ago)
- ⬛ Python Individual Conditional Expectation Plot Toolbox (☆165, updated 4 years ago)
- (☆48, updated 5 years ago)
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) (☆82, updated 2 years ago)
- Code for reproducing results in Delayed Impact of Fair Machine Learning (Liu et al 2018) (☆14, updated 2 years ago)
- Responsible AI knowledge base (☆99, updated last year)
- this repo might get accepted (☆28, updated 4 years ago)
- Guidelines for the responsible use of explainable AI and machine learning (☆17, updated 2 years ago)
- Bias Auditing & Fair ML Toolkit (☆709, updated 6 months ago)
- Python implementation of R package breakDown (☆42, updated last year)
- A machine learning testing framework for sklearn and pandas. The goal is to help folks assess whether things have changed over time. (☆102, updated 3 years ago)