marcoancona / DeepExplain
A unified framework of perturbation- and gradient-based attribution methods for Deep Neural Network interpretability. DeepExplain also includes support for Shapley Value sampling. (ICLR 2018)
☆745 · Updated 4 years ago
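The sketch below follows the context-manager usage pattern documented in the DeepExplain README (a TensorFlow 1.x / standalone-Keras session). The toy model, the random data, and the `ys=` keyword are illustrative assumptions; method keys and exact argument names may differ across DeepExplain versions.

```python
# Minimal sketch of the DeepExplain context-manager API, following the README's
# usage pattern. The toy Keras classifier and random data are placeholders.
import numpy as np
from keras import backend as K
from keras.models import Model, Sequential
from keras.layers import Dense, Activation
from deepexplain.tensorflow import DeepExplain

# Toy classifier standing in for a trained model (logits followed by softmax)
model = Sequential([
    Dense(32, activation='relu', input_shape=(16,)),
    Dense(3),                 # pre-softmax logits
    Activation('softmax'),
])
xs = np.random.rand(10, 16).astype('float32')   # batch of inputs to explain
ys = np.eye(3)[np.random.randint(0, 3, 10)]     # one-hot targets for the batch

with DeepExplain(session=K.get_session()) as de:
    # The graph must be (re)built inside the DeepExplain context
    input_tensor = model.layers[0].input
    # Target the pre-softmax layer by rebuilding a model up to the last Dense
    fModel = Model(inputs=input_tensor, outputs=model.layers[-2].output)
    target_tensor = fModel(input_tensor)

    # Gradient-based attribution; the README also lists keys such as
    # 'saliency', 'intgrad', 'elrp', 'deeplift', 'occlusion', 'shapley_sampling'
    attributions = de.explain('grad*input', target_tensor, input_tensor, xs, ys=ys)

print(attributions.shape)  # one attribution map per input sample
```

Swapping the method key (e.g. to 'shapley_sampling') switches between gradient-based and perturbation-based attributions without changing the rest of the call, which is the "unified framework" idea described above.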
Alternatives and similar repositories for DeepExplain:
Users interested in DeepExplain are comparing it to the libraries listed below.
- Public facing deeplift repo ☆852 · Updated 2 years ago
- A toolbox to iNNvestigate neural networks' predictions! ☆1,290 · Updated last year
- Attributing predictions made by the Inception network using the Integrated Gradients method ☆615 · Updated 3 years ago
- The LRP Toolbox provides simple and accessible stand-alone implementations of LRP for artificial neural networks supporting Matlab and Py… ☆331 · Updated 2 years ago
- Code for the TCAV ML interpretability project ☆634 · Updated 7 months ago
- ☆913 · Updated 2 years ago
- Code for "High-Precision Model-Agnostic Explanations" paper ☆799 · Updated 2 years ago
- Interesting resources related to XAI (Explainable Artificial Intelligence) ☆824 · Updated 2 years ago
- Implementation of Layerwise Relevance Propagation for heatmapping "deep" layers ☆98 · Updated 6 years ago
- Layer-wise Relevance Propagation (LRP) for LSTMs. ☆223 · Updated 4 years ago
- Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more). ☆970 · Updated last year
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆594 · Updated last month
- ☆100 · Updated 6 years ago
- PyTorch implementation of Interpretable Explanations of Black Boxes by Meaningful Perturbation ☆334 · Updated 3 years ago
- Tensorflow tutorial for various Deep Neural Network visualization techniques ☆346 · Updated 4 years ago
- A PyTorch implementation of Neighbourhood Components Analysis. ☆400 · Updated 4 years ago
- Interpretability Methods for tf.keras models with Tensorflow 2.x ☆1,024 · Updated 9 months ago
- A repository for explaining feature attributions and feature interactions in deep neural networks. ☆186 · Updated 3 years ago
- All about explainable AI, algorithmic fairness and more ☆107 · Updated last year
- Code for all experiments. ☆315 · Updated 3 years ago
- Interpretability and explainability of data and machine learning models ☆1,665 · Updated 3 weeks ago
- For calculating global feature importance using Shapley values. ☆266 · Updated this week
- List of relevant resources for machine learning from explanatory supervision ☆156 · Updated 2 months ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆240 · Updated 7 months ago
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. ☆214 · Updated 8 months ago
- Keras implementation for DASP: Deep Approximate Shapley Propagation (ICML 2019) ☆61 · Updated 5 years ago
- This is the pytorch implementation of the paper - Axiomatic Attribution for Deep Networks. ☆182 · Updated 2 years ago
- Python/Keras implementation of integrated gradients presented in "Axiomatic Attribution for Deep Networks" for explaining any model defin… ☆217 · Updated 6 years ago
- Hyperparameter optimization that enables researchers to experiment, visualize, and scale quickly. ☆335 · Updated 4 years ago
- ☆125 · Updated 3 years ago