marcoancona / DeepExplain
A unified framework of perturbation- and gradient-based attribution methods for Deep Neural Network interpretability. DeepExplain also includes support for Shapley Values sampling. (ICLR 2018)
☆761 · Updated 5 years ago
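For context, the sketch below illustrates DeepExplain's documented context-manager API for computing attributions. It assumes a TensorFlow 1.x graph environment (the library targets TF1), and the toy two-layer network stands in for whatever model you actually want to attribute; it is a minimal sketch, not the project's own example.

```python
# Minimal sketch of DeepExplain usage, assuming TensorFlow 1.x.
# The toy graph (X, logits) and random inputs (xs) are placeholder assumptions.
import numpy as np
import tensorflow as tf
from deepexplain.tensorflow import DeepExplain

# Toy two-layer network, just to have tensors to attribute.
X = tf.placeholder(tf.float32, shape=[None, 4])
W1 = tf.Variable(tf.random_normal([4, 8]))
h = tf.nn.relu(tf.matmul(X, W1))
W2 = tf.Variable(tf.random_normal([8, 3]))
logits = tf.matmul(h, W2)

xs = np.random.rand(5, 4).astype(np.float32)  # batch of inputs to explain

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    with DeepExplain(session=sess) as de:
        # Gradient-based attribution: Integrated Gradients on the logits.
        attributions = de.explain('intgrad', logits, X, xs)
        # Perturbation-based alternative via the same interface:
        # attributions = de.explain('shapley_sampling', logits, X, xs, samples=100)

print(attributions.shape)  # one attribution score per input feature
```

Note that for methods relying on modified gradient rules (e.g. DeepLIFT or epsilon-LRP), DeepExplain expects the model to be constructed inside the `DeepExplain` context so the gradient overrides take effect; the purely gradient- or perturbation-based methods shown above do not need that.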
Alternatives and similar repositories for DeepExplain
Users interested in DeepExplain are comparing it to the libraries listed below.
- Public facing deeplift repo · ☆873 · Updated 3 years ago
- A toolbox to iNNvestigate neural networks' predictions! · ☆1,307 · Updated 8 months ago
- The LRP Toolbox provides simple and accessible stand-alone implementations of LRP for artificial neural networks supporting Matlab and Py… · ☆335 · Updated 3 years ago
- Attributing predictions made by the Inception network using the Integrated Gradients method · ☆643 · Updated 3 years ago
- Code for the TCAV ML interpretability project · ☆647 · Updated 6 months ago
- ☆919 · Updated 2 years ago
- Code for "High-Precision Model-Agnostic Explanations" paper · ☆812 · Updated 3 years ago
- Interesting resources related to XAI (Explainable Artificial Intelligence) · ☆841 · Updated 3 years ago
- Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more). · ☆988 · Updated last year
- Layer-wise Relevance Propagation (LRP) for LSTMs. · ☆226 · Updated 5 years ago
- Bayesian Deep Learning Benchmarks · ☆672 · Updated 2 years ago
- Interpretability Methods for tf.keras models with Tensorflow 2.x · ☆1,036 · Updated last year
- Code for all experiments. · ☆319 · Updated 4 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… · ☆74 · Updated 3 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations · ☆634 · Updated 5 months ago
- Tensorflow tutorial for various Deep Neural Network visualization techniques · ☆346 · Updated 5 years ago
- Implementation of Layerwise Relevance Propagation for heatmapping "deep" layers · ☆98 · Updated 7 years ago
- Building a Bayesian deep learning classifier · ☆492 · Updated 8 years ago
- All about explainable AI, algorithmic fairness and more · ☆110 · Updated 2 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… · ☆128 · Updated 4 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations · ☆252 · Updated last year
- For calculating global feature importance using Shapley values. · ☆282 · Updated last week
- ☆135 · Updated 6 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) · ☆85 · Updated 3 years ago
- Data Shapley: Equitable Valuation of Data for Machine Learning · ☆284 · Updated last year
- ☆622 · Updated 2 years ago
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. · ☆239 · Updated 4 months ago
- A simple way to calibrate your neural network. · ☆1,166 · Updated 4 months ago
- A PyTorch implementation of Neighbourhood Components Analysis. · ☆399 · Updated 5 years ago
- Estimating and plotting the decision boundary (decision surface) of machine learning classifiers in higher dimensions (scikit-learn compa… · ☆235 · Updated 2 years ago