marcoancona / DeepExplain
A unified framework of perturbation- and gradient-based attribution methods for Deep Neural Network interpretability. DeepExplain also includes support for Shapley Values sampling. (ICLR 2018)
☆750 · Updated 4 years ago
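Since the description highlights Shapley Values sampling, here is a minimal, self-contained NumPy sketch of the permutation-sampling estimator for Shapley feature attributions. This is an illustrative implementation of the general technique, not DeepExplain's actual code or API; the function names and the linear toy model are assumptions for the example.

```python
import numpy as np

def shapley_sampling(f, x, baseline, n_samples=200, rng=None):
    """Estimate Shapley values for the features of input `x` by sampling
    random feature permutations. `f` maps a 1-D feature vector to a scalar;
    `baseline` supplies the "absent" value for each feature."""
    rng = np.random.default_rng(rng)
    n = x.shape[0]
    phi = np.zeros(n)
    for _ in range(n_samples):
        perm = rng.permutation(n)
        z = baseline.copy()
        prev = f(z)
        # Add features one at a time in the sampled order; the marginal
        # change in f is credited to the feature just added.
        for i in perm:
            z[i] = x[i]
            cur = f(z)
            phi[i] += cur - prev
            prev = cur
    return phi / n_samples

# Sanity check on a linear model f(z) = w . z, where the Shapley values
# are exactly w_i * (x_i - baseline_i) regardless of permutation order.
w = np.array([1.0, -2.0, 0.5])
f = lambda z: float(w @ z)
x = np.array([3.0, 1.0, 4.0])
b = np.zeros(3)
print(shapley_sampling(f, x, b, n_samples=50))  # → [ 3. -2.  2.]
```

For non-linear models the estimate converges to the true Shapley values as `n_samples` grows; the linear case above is exact for any sample count, which makes it a convenient correctness check.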
Alternatives and similar repositories for DeepExplain:
Users interested in DeepExplain are comparing it to the libraries listed below.
- Public facing deeplift repo (☆853, updated 3 years ago)
- A toolbox to iNNvestigate neural networks' predictions! (☆1,295, updated 3 weeks ago)
- The LRP Toolbox provides simple and accessible stand-alone implementations of LRP for artificial neural networks supporting Matlab and Py… (☆332, updated 2 years ago)
- Attributing predictions made by the Inception network using the Integrated Gradients method (☆624, updated 3 years ago)
- Code for "High-Precision Model-Agnostic Explanations" paper (☆802, updated 2 years ago)
- Code for the TCAV ML interpretability project (☆639, updated 9 months ago)
- Interesting resources related to XAI (Explainable Artificial Intelligence) (☆827, updated 2 years ago)
- Interpretability Methods for tf.keras models with Tensorflow 2.x (☆1,025, updated 11 months ago)
- Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more) (☆973, updated last year)
- Code for all experiments. (☆318, updated 4 years ago)
- Layer-wise Relevance Propagation (LRP) for LSTMs (☆224, updated 5 years ago)
- Tensorflow tutorial for various Deep Neural Network visualization techniques (☆347, updated 4 years ago)
- Bayesian Deep Learning Benchmarks (☆670, updated 2 years ago)
- Implementation of Layerwise Relevance Propagation for heatmapping "deep" layers (☆98, updated 6 years ago)
- Literature survey, paper reviews, experimental setups and a collection of implementations for baseline methods for predictive uncertaint… (☆624, updated 2 years ago)
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) (☆128, updated 3 years ago)
- Tuning hyperparams fast with Hyperband (☆593, updated 6 years ago)
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations (☆598, updated 3 months ago)
- PyTorch implementation of Interpretable Explanations of Black Boxes by Meaningful Perturbation (☆336, updated 3 years ago)
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… (☆127, updated 4 years ago)
- For calculating global feature importance using Shapley values (☆268, updated this week)
- Towards Automatic Concept-based Explanations (☆159, updated last year)
- Code and documentation for experiments in the TreeExplainer paper (☆185, updated 5 years ago)
- H2O.ai Machine Learning Interpretability Resources (☆488, updated 4 years ago)
- A repository for explaining feature attributions and feature interactions in deep neural networks (☆187, updated 3 years ago)
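Several of the repositories above implement Integrated Gradients, the method DeepExplain also supports. As a reference point, here is a minimal NumPy sketch of the technique for a differentiable scalar function; the gradient is supplied analytically here (`grad_f` is a hypothetical stand-in for a framework's autograd), and the toy model is an assumption for the example.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=100):
    """Approximate Integrated Gradients attributions:
    (x - baseline) * integral_0^1 grad_f(baseline + a*(x - baseline)) da,
    using a midpoint Riemann sum over `steps` interpolation points."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Example: f(z) = z0^2 + 3*z1, with its analytic gradient [2*z0, 3].
grad_f = lambda z: np.array([2.0 * z[0], 3.0])
x = np.array([2.0, 1.0])
b = np.zeros(2)
attr = integrated_gradients(grad_f, x, b)
# Completeness: attributions sum to f(x) - f(baseline) = 4 + 3 = 7.
print(attr, attr.sum())  # → [4. 3.] 7.0
```

The completeness property (attributions summing to the difference in model output between the input and the baseline) is the usual sanity check for an Integrated Gradients implementation; the midpoint rule makes it exact here because the integrand is linear in the interpolation coefficient.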