A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also includes support for Shapley Values sampling. (ICLR 2018)
☆758 · Updated Aug 25, 2020
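DeepExplain unifies gradient-based attribution methods such as Integrated Gradients with perturbation-based ones. As a rough illustration of what a gradient-based attribution computes, here is a minimal Integrated Gradients sketch in pure NumPy, using numeric gradients on a toy linear model. This is an assumption-laden sketch for intuition only, not DeepExplain's actual API.

```python
import numpy as np

def integrated_gradients(f, x, baseline, steps=50):
    """Approximate Integrated Gradients for a scalar-output function f.

    Averages numeric gradients of f along the straight-line path from
    baseline to x, then scales by (x - baseline)."""
    eps = 1e-5
    total_grad = np.zeros_like(x, dtype=float)
    for alpha in np.linspace(0.0, 1.0, steps):
        point = baseline + alpha * (x - baseline)
        # Central-difference gradient at this interpolation point.
        g = np.zeros_like(point)
        for i in range(len(point)):
            up, down = point.copy(), point.copy()
            up[i] += eps
            down[i] -= eps
            g[i] = (f(up) - f(down)) / (2 * eps)
        total_grad += g
    return (x - baseline) * total_grad / steps

# Toy "network": a plain linear model (hypothetical weights for illustration).
w = np.array([1.0, -2.0, 0.5])
f = lambda z: float(w @ z)

x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros(3)
attr = integrated_gradients(f, x, baseline)
# For a linear model the gradient is w everywhere, so attributions equal
# w * (x - baseline), and they sum to f(x) - f(baseline) (completeness).
print(attr)  # → [ 1.  -2.   0.5]
```

Real libraries in the list below (Captum, SHAP, iNNvestigate) replace the numeric finite differences with framework autodiff and batch the interpolation points, but the underlying computation is this path integral of gradients.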
Alternatives and similar repositories for DeepExplain
Users interested in DeepExplain are comparing it to the libraries listed below.
- A toolbox to iNNvestigate neural networks' predictions! ☆1,307 · Updated Apr 11, 2025
- Model interpretability and understanding for PyTorch ☆5,614 · Updated this week
- The LRP Toolbox provides simple and accessible stand-alone implementations of LRP for artificial neural networks supporting Matlab and Py… ☆336 · Updated Jun 13, 2022
- Attributing predictions made by the Inception network using the Integrated Gradients method ☆651 · Updated Feb 23, 2022
- Interpretability Methods for tf.keras models with Tensorflow 2.x ☆1,037 · Updated Jun 3, 2024
- Keras implementation for DASP: Deep Approximate Shapley Propagation (ICML 2019) ☆62 · Updated Jul 1, 2019
- A game theoretic approach to explain the output of any machine learning model. ☆25,355 · Updated this week
- Layer-wise Relevance Propagation (LRP) for LSTMs. ☆225 · Updated Apr 24, 2020
- A collection of infrastructure and tools for research in neural network interpretability. ☆4,703 · Updated Feb 6, 2023
- ☆917 · Updated Mar 19, 2023
- Algorithms for explaining machine learning models ☆2,626 · Updated Oct 17, 2025
- Fit interpretable models. Explain blackbox machine learning. ☆6,840 · Updated this week
- Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more). ☆994 · Updated Mar 20, 2024
- TF MOtif Discovery from Importance SCOres ☆178 · Updated Feb 20, 2026
- TensorFlow tutorial for various Deep Neural Network visualization techniques ☆345 · Updated Aug 22, 2020
- A curated list of awesome responsible machine learning resources. ☆4,016 · Updated Mar 16, 2026
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP! ☆29 · Updated Jul 13, 2019
- Code for the "High-Precision Model-Agnostic Explanations" paper ☆813 · Updated Jul 19, 2022
- Lime: Explaining the predictions of any machine learning classifier ☆12,121 · Updated Jul 25, 2024
- ☆114 · Updated Nov 21, 2022
- Code for the paper "Towards Better Understanding Attribution Methods" (CVPR 2022) ☆17 · Updated Jun 13, 2022
- Supervised Local Modeling for Interpretability ☆29 · Updated Oct 27, 2018
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability" ☆29 · Updated May 2, 2019
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated Mar 22, 2021
- Code for the TCAV ML interpretability project ☆653 · Updated Feb 5, 2026
- Python/Keras implementation of integrated gradients presented in "Axiomatic Attribution for Deep Networks" for explaining any model defin… ☆217 · Updated Apr 28, 2018
- A repository for explaining feature attributions and feature interactions in deep neural networks. ☆192 · Updated Jan 16, 2022
- A lightweight implementation of removal-based explanations for ML models. ☆59 · Updated Jul 19, 2021
- ☆124 · Updated May 10, 2021
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks ☆350 · Updated Jul 22, 2020
- PyTorch implementation of recent visual attribution methods for model interpretability ☆146 · Updated Feb 27, 2020
- ☆262 · Updated Dec 10, 2019
- Model zoo for genomics ☆174 · Updated Dec 17, 2025
- HIVE: Evaluating the Human Interpretability of Visual Explanations (ECCV 2022) ☆22 · Updated Jan 19, 2023
- Code/figures in "Right for the Right Reasons" ☆57 · Updated Dec 29, 2020
- Generate Diverse Counterfactual Explanations for any machine learning model. ☆1,508 · Updated Jul 13, 2025
- ☆100 · Updated Mar 29, 2018
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆126 · Updated Aug 25, 2021
- Tools for training explainable models using attribution priors. ☆126 · Updated Mar 19, 2021