axa-rev-research / locality-interpretable-surrogate
Repository of the paper "Defining Locality for Surrogates in Post-hoc Interpretability", published at the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018)
☆17 · Updated 4 years ago
Alternatives and similar repositories for locality-interpretable-surrogate
Users interested in locality-interpretable-surrogate are comparing it to the libraries listed below.
- Code for "High-Precision Model-Agnostic Explanations" paper · ☆812 · Updated 3 years ago
- H2O.ai Machine Learning Interpretability Resources · ☆491 · Updated 5 years ago
- ⬛ Python Individual Conditional Expectation Plot Toolbox · ☆164 · Updated 5 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… · ☆75 · Updated 3 years ago
- Python tools to check recourse in linear classification · ☆77 · Updated 4 years ago
- LOcal Rule-based Explanations · ☆54 · Updated 2 years ago
- Learning Certifiably Optimal Rule Lists · ☆176 · Updated 4 years ago
- Python code for training fair logistic regression classifiers · ☆192 · Updated 3 years ago
- Python implementation of the rulefit algorithm · ☆432 · Updated 2 years ago
- A library that implements fairness-aware machine learning algorithms · ☆125 · Updated 5 years ago
- Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University · ☆44 · Updated 2 years ago
- ☆368 · Updated 4 years ago
- Interpret Community extends the Interpret repository with additional interpretability techniques and utility functions to handle real-world d… · ☆436 · Updated 10 months ago
- For calculating global feature importance using Shapley values · ☆282 · Updated last week
- Create sparse and accurate risk scoring systems! · ☆44 · Updated last year
- Comparing fairness-aware machine learning techniques · ☆160 · Updated 3 years ago
- All about explainable AI, algorithmic fairness and more · ☆110 · Updated 2 years ago
- Simple customizable risk scores in Python · ☆142 · Updated 2 years ago
- ☆919 · Updated 2 years ago
- Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, … · ☆679 · Updated last year
- apricot implements submodular optimization for selecting subsets of massive data sets to train machine learning models qui… · ☆525 · Updated last month
- Bias Auditing & Fair ML Toolkit · ☆745 · Updated 2 weeks ago
- This repository contains the full code for the "Towards fairness in machine learning with adversarial networks" blog post · ☆119 · Updated 4 years ago
- Mixed Effects Random Forest · ☆238 · Updated last year
- Interesting resources related to XAI (Explainable Artificial Intelligence) · ☆841 · Updated 3 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) · ☆85 · Updated 3 years ago
- moDel Agnostic Language for Exploration and eXplanation · ☆1,451 · Updated 2 months ago
- Editing machine learning models to reflect human knowledge and values · ☆128 · Updated 2 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics · ☆77 · Updated 2 years ago
- In this work, we propose a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME) and the experimental results o… · ☆29 · Updated 2 years ago