lvhimabindu / interpretable_decision_sets
☆20 · Updated 7 years ago
Alternatives and similar repositories for interpretable_decision_sets
Users interested in interpretable_decision_sets are comparing it to the libraries listed below.
- Model Agnostic Counterfactual Explanations · ☆88 · Updated 3 years ago
- python tools to check recourse in linear classification · ☆76 · Updated 5 years ago
- Python code for training fair logistic regression classifiers. · ☆191 · Updated 4 years ago
- Datasets derived from US census data · ☆276 · Updated last year
- All about explainable AI, algorithmic fairness and more · ☆110 · Updated 2 years ago
- Code and data for the experiments in "On Fairness and Calibration" · ☆51 · Updated 3 years ago
- LOcal Rule-based Explanations · ☆54 · Updated 2 years ago
- ☆125 · Updated 4 years ago
- An implementation of the IDS (Interpretable Decision Sets) algorithm. · ☆24 · Updated 4 years ago
- Comparing fairness-aware machine learning techniques. · ☆161 · Updated 3 years ago
- Versatile Verification of Tree Ensembles · ☆20 · Updated last year
- Code for "Counterfactual Fairness" (NIPS 2017) · ☆55 · Updated 7 years ago
- Automated Scalable Bayesian Inference · ☆131 · Updated 4 years ago
- Generalized Optimal Sparse Decision Trees · ☆70 · Updated last year
- Fair Empirical Risk Minimization (FERM) · ☆37 · Updated 5 years ago
- Optimal Sparse Decision Trees · ☆108 · Updated 2 years ago
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms · ☆298 · Updated 2 years ago
- A toolbox for differentially private data generation · ☆130 · Updated 2 years ago
- ☆316 · Updated 2 years ago
- Learning Certifiably Optimal Rule Lists · ☆176 · Updated 4 years ago
- library for fair auditing and learning of classifiers with respect to rich subgroup fairness. · ☆32 · Updated 6 years ago
- This repository contains the full code for the "Towards fairness in machine learning with adversarial networks" blog post. · ☆119 · Updated 4 years ago
- Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University · ☆45 · Updated 3 years ago
- Multi-Objective Counterfactuals · ☆43 · Updated 3 years ago
- [ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples · ☆69 · Updated 6 months ago
- Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty i… · ☆268 · Updated 4 months ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics · ☆77 · Updated 2 years ago
- Code to reproduce our paper on probabilistic algorithmic recourse: https://arxiv.org/abs/2006.06831 · ☆37 · Updated 3 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) · ☆84 · Updated 3 years ago
- Supervised Local Modeling for Interpretability · ☆29 · Updated 7 years ago