nesl / ExMatchina
A Deep Neural Network explanation-by-example library for generating meaningful explanations
☆17 · Updated 5 years ago
Alternatives and similar repositories for ExMatchina
Users interested in ExMatchina are comparing it to the libraries listed below.
- Code for "Counterfactual Fairness" (NIPS 2017) ☆55 · Updated 7 years ago
- Bivariate Shapley is a Shapley-based method of identifying directional feature interactions and feature redundancy ☆20 · Updated 8 months ago
- Library of transfer learners and domain-adaptive classifiers. ☆93 · Updated 7 years ago
- Python code for training fair logistic regression classifiers. ☆191 · Updated 4 years ago
- The Randomized Conditional Independence Test (RCIT) and the Randomized Conditional Correlation Test (RCoT) ☆30 · Updated 6 years ago
- A unified framework of perturbation and gradient-based attribution methods for Deep Neural Network interpretability. DeepExplain also in… ☆760 · Updated 5 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆252 · Updated last year
- 💡 Adversarial attacks on explanations and how to defend them ☆334 · Updated last year
- This repository provides details of the experimental code in the paper: Instance-based Counterfactual Explanations for Time Series Classi… ☆22 · Updated 4 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 4 years ago
- Attributing predictions made by the Inception network using the Integrated Gradients method ☆644 · Updated 3 years ago
- Datasets derived from US census data ☆276 · Updated last year
- Python code of the Hilbert-Schmidt Independence Criterion ☆90 · Updated 3 years ago
- Healthcare-specific tools for bias analysis ☆39 · Updated 3 years ago
- Codebase for "Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains Its Predictions" (to appear in AA… ☆77 · Updated 8 years ago
- A benchmark for distribution shift in tabular data ☆56 · Updated last year
- Library for fair auditing and learning of classifiers with respect to rich subgroup fairness. ☆32 · Updated 6 years ago
- Tools for training explainable models using attribution priors. ☆125 · Updated 4 years ago
- Machine Learning and Artificial Intelligence for Medicine. ☆462 · Updated 2 years ago
- ☆20 · Updated 6 years ago
- Codebase for information-theoretic Shapley values to explain predictive uncertainty. This repo contains the code related to the paper Watso… ☆22 · Updated last year
- Critical difference diagram with Wilcoxon-Holm post-hoc analysis. ☆300 · Updated 3 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆75 · Updated 3 years ago
- How Can I Explain This to You? An Empirical Study of Deep Neural Network Explanation Methods ☆24 · Updated 5 years ago
- An amortized approach for calculating local Shapley value explanations ☆105 · Updated 2 years ago
- The LRP Toolbox provides simple and accessible stand-alone implementations of LRP for artificial neural networks supporting Matlab and Py… ☆335 · Updated 3 years ago
- For calculating global feature importance using Shapley values. ☆284 · Updated 2 weeks ago
- Python/R library for feature selection in neural nets ("Feature selection using Stochastic Gates", ICML 2020). ☆110 · Updated 3 years ago
- Fair Empirical Risk Minimization (FERM) ☆37 · Updated 5 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆84 · Updated 3 years ago
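Several of the repositories above implement gradient- or Shapley-based attribution. As a reference point for that family, here is a minimal NumPy sketch of Integrated Gradients, one of the methods listed. It is not code from any repository above; the toy quadratic model and all function names are illustrative, and the path integral is approximated with a midpoint Riemann sum.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    """Approximate IG_i(x) = (x_i - x'_i) * ∫_0^1 ∂f(x' + a(x - x'))/∂x_i da
    by averaging gradients at `steps` midpoints along the straight-line path
    from the baseline x' to the input x."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoint rule in (0, 1)
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    avg_grad = total / steps
    return (x - baseline) * avg_grad

# Toy differentiable model: f(x) = sum(x_i^2), with analytic gradient 2x.
f = lambda x: float(np.sum(x ** 2))
grad_f = lambda x: 2.0 * x

x = np.array([1.0, 2.0])
baseline = np.zeros(2)
attr = integrated_gradients(grad_f, x, baseline)
# Completeness axiom: attributions sum to f(x) - f(baseline).
print(attr, attr.sum(), f(x) - f(baseline))
```

For this quadratic, the exact attributions are x_i^2 (here [1, 4]), and their sum matches f(x) − f(baseline) = 5, which is the completeness property that motivates the method.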