lvhimabindu / interpretable_decision_sets
☆ 20 · Updated 6 years ago
Alternatives and similar repositories for interpretable_decision_sets:
Users interested in interpretable_decision_sets are comparing it to the libraries listed below.
- Code to reproduce our paper on probabilistic algorithmic recourse: https://arxiv.org/abs/2006.06831 ☆ 36 · Updated 2 years ago
- An implementation of the IDS (Interpretable Decision Sets) algorithm. ☆ 24 · Updated 4 years ago
- Code accompanying the paper "Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers" ☆ 31 · Updated 2 years ago
- Python tools to check recourse in linear classification ☆ 76 · Updated 4 years ago
- Code and data for the experiments in "On Fairness and Calibration" ☆ 51 · Updated 3 years ago
- Model Agnostic Counterfactual Explanations ☆ 87 · Updated 2 years ago
- Bayesian or-of-and ☆ 34 · Updated 3 years ago
- ☆ 31 · Updated 3 years ago
- Code/figures in "Right for the Right Reasons" ☆ 55 · Updated 4 years ago
- Library for fair auditing and learning of classifiers with respect to rich subgroup fairness. ☆ 32 · Updated 5 years ago
- A new framework to generate interpretable classification rules ☆ 17 · Updated 2 years ago
- Python interface to the Scalable Bayesian Rule Lists ☆ 19 · Updated 5 years ago
- Supervised Local Modeling for Interpretability ☆ 28 · Updated 6 years ago
- A lightweight implementation of removal-based explanations for ML models. ☆ 59 · Updated 3 years ago
- Code for "Counterfactual Fairness" (NIPS 2017) ☆ 52 · Updated 6 years ago
- ☆ 32 · Updated 3 years ago
- Fair Empirical Risk Minimization (FERM) ☆ 37 · Updated 4 years ago
- [ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples ☆ 67 · Updated 2 years ago
- MDL complexity computations and experiments from the paper "Revisiting complexity and the bias-variance tradeoff". ☆ 18 · Updated last year
- LOcal Rule-based Explanations ☆ 53 · Updated last year
- Multiple Generalized Additive Models implemented in Python (EBM, XGB, Spline, FLAM). Code for our KDD 2021 paper "How Interpretable and T…" ☆ 12 · Updated 3 years ago
- Using / reproducing DAC from the paper "Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees" ☆ 27 · Updated 4 years ago
- XReason: formal reasoning about explanations for ML models ☆ 16 · Updated last year
- Code for the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization". ☆ 24 · Updated last year
- Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University ☆ 45 · Updated 2 years ago
- XAI-Bench: a library for benchmarking feature-attribution explainability techniques ☆ 64 · Updated 2 years ago
- Versatile Verification of Tree Ensembles ☆ 17 · Updated 10 months ago
- ☆ 20 · Updated 5 years ago
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] ☆ 50 · Updated 5 years ago
- Multi-Objective Counterfactuals ☆ 41 · Updated 2 years ago