marcoancona / DeepExplain
A unified framework of perturbation and gradient-based attribution methods for Deep Neural Network interpretability. DeepExplain also includes support for Shapley Values sampling. (ICLR 2018)
☆754 · Updated 4 years ago
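For orientation, here is a minimal sketch of how DeepExplain's attribution API is typically invoked, following the usage pattern from the project's README (TF1-era Keras/TensorFlow session API). The toy model, the sample batch `xs`, and the `'grad*input'` method key are illustrative assumptions, and exact signatures may differ between versions.

```python
# Sketch only: assumes a TF1-style Keras model and the DeepExplain context
# manager described in the project's README; details may vary across versions.
import numpy as np
from keras import backend as K
from keras.layers import Dense, Input
from keras.models import Model
from deepexplain.tensorflow import DeepExplain

# Toy classifier, purely for illustration.
inp = Input(shape=(10,))
out = Dense(2, activation='softmax')(Dense(16, activation='relu')(inp))
model = Model(inp, out)

xs = np.random.rand(5, 10)  # batch of samples to attribute

with DeepExplain(session=K.get_session()) as de:
    # The graph to be explained must be (re)built inside the DeepExplain
    # context so the library can install its modified gradient operators.
    input_tensor = model.layers[0].input
    target_tensor = Model(inputs=input_tensor, outputs=model.layers[-1].output)(input_tensor)
    # 'grad*input' is one of several method keys; the README also lists
    # 'intgrad', 'elrp', 'deeplift', 'occlusion', and 'shapley_sampling'.
    attributions = de.explain('grad*input', target_tensor, input_tensor, xs)
    print(attributions.shape)  # one attribution map per input sample
```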
Alternatives and similar repositories for DeepExplain
Users interested in DeepExplain are comparing it to the libraries listed below.
- Public facing deeplift repo ☆861 · Updated 3 years ago
- Attributing predictions made by the Inception network using the Integrated Gradients method ☆631 · Updated 3 years ago
- A toolbox to iNNvestigate neural networks' predictions! ☆1,303 · Updated 3 months ago
- The LRP Toolbox provides simple and accessible stand-alone implementations of LRP for artificial neural networks supporting Matlab and Py… ☆334 · Updated 3 years ago
- Code for the TCAV ML interpretability project ☆643 · Updated last month
- Interesting resources related to XAI (Explainable Artificial Intelligence) ☆837 · Updated 3 years ago
- Interpretability Methods for tf.keras models with TensorFlow 2.x ☆1,029 · Updated last year
- Code for "High-Precision Model-Agnostic Explanations" paper ☆805 · Updated 3 years ago
- Layer-wise Relevance Propagation (LRP) for LSTMs ☆225 · Updated 5 years ago
- Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more) ☆979 · Updated last year
- Bayesian Deep Learning Benchmarks ☆672 · Updated 2 years ago
- Data Shapley: Equitable Valuation of Data for Machine Learning ☆274 · Updated last year
- Implementation of Layerwise Relevance Propagation for heatmapping "deep" layers ☆99 · Updated 6 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆612 · Updated last week
- TensorFlow tutorial for various Deep Neural Network visualization techniques ☆346 · Updated 4 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆73 · Updated 2 years ago
- All about explainable AI, algorithmic fairness and more ☆110 · Updated last year
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- Code for all experiments ☆318 · Updated 4 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 4 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆129 · Updated 3 years ago
- Hyperparameter optimization that enables researchers to experiment, visualize, and scale quickly ☆338 · Updated 4 years ago
- A repository for explaining feature attributions and feature interactions in deep neural networks ☆188 · Updated 3 years ago
- This repository contains the full code for the "Towards fairness in machine learning with adversarial networks" blog post ☆118 · Updated 4 years ago
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks with attribution methods like LRP ☆228 · Updated 2 weeks ago