A unified framework of perturbation- and gradient-based attribution methods for Deep Neural Network interpretability. DeepExplain also includes support for Shapley value sampling. (ICLR 2018)
☆762, updated Aug 25, 2020
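To make the "Shapley value sampling" idea mentioned above concrete, here is a minimal, library-independent sketch of the standard permutation-sampling estimator (Castro et al., 2009). The function name `shapley_sampling` and its arguments are illustrative, not DeepExplain's actual API; the `baseline` array supplies the value a feature takes when it is "absent" from a coalition.

```python
import numpy as np

def shapley_sampling(f, x, baseline, n_samples=200, rng=None):
    """Estimate Shapley values of each feature of x for model f by
    averaging marginal contributions over random feature orderings.
    Illustrative sketch only -- not DeepExplain's implementation."""
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_samples):
        perm = rng.permutation(d)     # a random ordering of the features
        z = baseline.copy()           # start with all features "absent"
        prev = f(z)
        for i in perm:
            z[i] = x[i]               # add feature i to the coalition
            cur = f(z)
            phi[i] += cur - prev      # marginal contribution of feature i
            prev = cur
    return phi / n_samples
```

For a linear model the estimator is exact regardless of the number of samples, since each feature's marginal contribution is the same in every ordering; for nonlinear models the estimate converges as `n_samples` grows.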
Alternatives and similar repositories for DeepExplain
Users interested in DeepExplain are comparing it to the libraries listed below.
- Public-facing DeepLIFT repo (☆872, updated Apr 28, 2022)
- A toolbox to iNNvestigate neural networks' predictions! (☆1,307, updated Apr 11, 2025)
- Model interpretability and understanding for PyTorch (☆5,560, updated this week)
- The LRP Toolbox provides simple and accessible stand-alone implementations of LRP for artificial neural networks, supporting Matlab and Py… (☆335, updated Jun 13, 2022)
- Interpretability methods for tf.keras models with TensorFlow 2.x (☆1,036, updated Jun 3, 2024)
- Attributing predictions made by the Inception network using the Integrated Gradients method (☆644, updated Feb 23, 2022)
- Keras implementation for DASP: Deep Approximate Shapley Propagation (ICML 2019) (☆62, updated Jul 1, 2019)
- A collection of infrastructure and tools for research in neural network interpretability. (☆4,704, updated Feb 6, 2023)
- A game-theoretic approach to explain the output of any machine learning model. (☆25,072, updated Feb 20, 2026)
- Algorithms for explaining machine learning models (☆2,612, updated Oct 17, 2025)
- Fit interpretable models. Explain black-box machine learning. (☆6,802, updated this week)
- Layer-wise Relevance Propagation (LRP) for LSTMs. (☆226, updated Apr 24, 2020)
- (☆916, updated Mar 19, 2023)
- Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more). (☆992, updated Mar 20, 2024)
- Lime: Explaining the predictions of any machine learning classifier (☆12,101, updated Jul 25, 2024)
- A curated list of awesome responsible machine learning resources. (☆3,988, updated Feb 18, 2026)
- Code for the "High-Precision Model-Agnostic Explanations" paper (☆814, updated Jul 19, 2022)
- TF MOtif Discovery from Importance SCOres (☆167, updated Feb 20, 2026)
- Code for the TCAV ML interpretability project (☆652, updated Feb 5, 2026)
- (☆125, updated May 10, 2021)
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks (☆349, updated Jul 22, 2020)
- Python/Keras implementation of integrated gradients presented in "Axiomatic Attribution for Deep Networks" for explaining any model defin… (☆217, updated Apr 28, 2018)
- Supervised Local Modeling for Interpretability (☆29, updated Oct 27, 2018)
- (☆113, updated Nov 21, 2022)
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… (☆128, updated Mar 22, 2021)
- A lightweight implementation of removal-based explanations for ML models. (☆59, updated Jul 19, 2021)
- HIVE: Evaluating the Human Interpretability of Visual Explanations (ECCV 2022) (☆22, updated Jan 19, 2023)
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) (☆129, updated Aug 25, 2021)
- (☆263, updated Dec 10, 2019)
- A repository for explaining feature attributions and feature interactions in deep neural networks. (☆193, updated Jan 16, 2022)
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP! (☆29, updated Jul 13, 2019)
- Code/figures in "Right for the Right Reasons" (☆57, updated Dec 29, 2020)
- Generate diverse counterfactual explanations for any machine learning model. (☆1,499, updated Jul 13, 2025)
- Interpretability and explainability of data and machine learning models (☆1,761, updated Feb 26, 2025)
- PyTorch implementation of recent visual attribution methods for model interpretability (☆146, updated Feb 27, 2020)
- For calculating global feature importance using Shapley values. (☆284, updated this week)
- H2O.ai Machine Learning Interpretability resources (☆491, updated Dec 12, 2020)
- PyTorch implementation of "Interpretable Explanations of Black Boxes by Meaningful Perturbation" (☆338, updated Nov 30, 2021)
- Neural network visualization toolkit for Keras (☆2,996, updated Feb 7, 2022)