Attributing predictions made by the Inception network using the Integrated Gradients method
☆647 · Feb 23, 2022 · Updated 4 years ago
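For context, Integrated Gradients attributes a model's prediction to its input features by accumulating gradients of the output along a straight-line path from a baseline to the input, then scaling by the input-baseline difference. A minimal pure-Python sketch of the idea (the quadratic toy model, function names, and step count below are illustrative, not taken from this repository):

```python
def integrated_gradients(grad_f, x, baseline, steps=50):
    """Approximate IG_i = (x_i - baseline_i) * ∫_0^1 ∂f/∂x_i(baseline + α(x - baseline)) dα
    via a midpoint Riemann sum over `steps` points on the path."""
    n = len(x)
    avg_grad = [0.0] * n
    for k in range(steps):
        alpha = (k + 0.5) / steps  # midpoint of the k-th interval
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_f(point)
        for i in range(n):
            avg_grad[i] += g[i] / steps
    return [(xi - b) * gi for xi, b, gi in zip(x, baseline, avg_grad)]

# Toy differentiable model: f(x) = Σ x_i², with gradient 2x.
f = lambda x: sum(v * v for v in x)
grad_f = lambda x: [2.0 * v for v in x]

x = [1.0, 2.0, -3.0]
baseline = [0.0, 0.0, 0.0]
attr = integrated_gradients(grad_f, x, baseline)

# Completeness axiom: attributions sum to f(x) - f(baseline).
print(attr, sum(attr), f(x) - f(baseline))
```

With a real network, `grad_f` would come from the framework's autodiff (e.g. backpropagating the chosen output logit), and the baseline is typically a black image or zero embedding.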
Alternatives and similar repositories for Integrated-Gradients
Users interested in Integrated-Gradients are comparing it to the repositories listed below
- PyTorch implementation of the paper "Axiomatic Attribution for Deep Networks". ☆191 · Mar 25, 2022 · Updated 3 years ago
- Public-facing DeepLIFT repo. ☆873 · Apr 28, 2022 · Updated 3 years ago
- A unified framework of perturbation- and gradient-based attribution methods for deep neural network interpretability. DeepExplain also in… ☆761 · Aug 25, 2020 · Updated 5 years ago
- Python/Keras implementation of integrated gradients presented in "Axiomatic Attribution for Deep Networks" for explaining any model defin… ☆216 · Apr 28, 2018 · Updated 7 years ago
- Model interpretability and understanding for PyTorch. ☆5,580 · Mar 11, 2026 · Updated last week
- Shows the relationship between ImageNet IDs and labels and PyTorch pre-trained model output IDs and labels. ☆10 · Oct 11, 2020 · Updated 5 years ago
- ☆113 · Nov 21, 2022 · Updated 3 years ago
- Framework-agnostic implementation of state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more). ☆994 · Mar 20, 2024 · Updated 2 years ago
- TensorFlow implementation of integrated gradients presented in "Axiomatic Attribution for Deep Networks". It explains connections between… ☆17 · Mar 11, 2019 · Updated 7 years ago
- PyTorch implementation of "Interpretable Explanations of Black Boxes by Meaningful Perturbation". ☆337 · Nov 30, 2021 · Updated 4 years ago
- Keras implementation of DASP: Deep Approximate Shapley Propagation (ICML 2019). ☆62 · Jul 1, 2019 · Updated 6 years ago
- A toolbox to iNNvestigate neural networks' predictions! ☆1,307 · Apr 11, 2025 · Updated 11 months ago
- Code for the paper "On the Connection Between Adversarial Robustness and Saliency Map Interpretability" by C. Etmann, S. Lunz, P. Maass, … ☆16 · May 9, 2019 · Updated 6 years ago
- Interpretability methods for tf.keras models with TensorFlow 2.x. ☆1,036 · Jun 3, 2024 · Updated last year
- IBD: Interpretable Basis Decomposition for Visual Explanation. ☆52 · Nov 28, 2018 · Updated 7 years ago
- Lime: Explaining the predictions of any machine learning classifier. ☆12,105 · Jul 25, 2024 · Updated last year
- PyTorch implementation of SmoothTaylor. ☆15 · Sep 5, 2021 · Updated 4 years ago
- SmoothGrad implementation in PyTorch. ☆172 · Apr 4, 2021 · Updated 4 years ago
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks. ☆349 · Jul 22, 2020 · Updated 5 years ago
- Code to reproduce results in the ACL 2018 paper "Did the Model Understand the Question?". ☆33 · Jul 17, 2018 · Updated 7 years ago
- Very concise example of integrated gradients (a method to reveal areas of attention in input images). ☆10 · Jun 17, 2019 · Updated 6 years ago
- ☆124 · May 10, 2021 · Updated 4 years ago
- Reference implementation for "Explanations Can Be Manipulated and Geometry Is to Blame". ☆37 · Jul 24, 2022 · Updated 3 years ago
- A game-theoretic approach to explain the output of any machine learning model. ☆25,131 · Mar 12, 2026 · Updated last week
- Layer-wise Relevance Propagation (LRP) for LSTMs. ☆225 · Apr 24, 2020 · Updated 5 years ago
- A list of publications on NLP interpretability (PRs welcome). ☆168 · Dec 13, 2020 · Updated 5 years ago
- A collection of infrastructure and tools for research in neural network interpretability. ☆4,703 · Feb 6, 2023 · Updated 3 years ago
- Pruning a CNN using a CNN, with a toy example. ☆23 · Jun 21, 2021 · Updated 4 years ago
- Visualizing Deep Neural Network Decisions: Prediction Difference Analysis. ☆122 · Oct 31, 2017 · Updated 8 years ago
- Code for implementing Bidirectional Relevance scores for Digital Histopathology, which was used for the resu… ☆16 · Mar 24, 2023 · Updated 2 years ago
- Network Dissection (http://netdissect.csail.mit.edu) for quantifying interpretability of deep CNNs. ☆453 · Aug 25, 2018 · Updated 7 years ago
- The LRP Toolbox provides simple and accessible stand-alone implementations of LRP for artificial neural networks, supporting Matlab and Py… ☆334 · Jun 13, 2022 · Updated 3 years ago
- ☆89 · Oct 8, 2022 · Updated 3 years ago
- Algorithms for explaining machine learning models. ☆2,621 · Oct 17, 2025 · Updated 5 months ago
- Code/figures in "Right for the Right Reasons". ☆57 · Dec 29, 2020 · Updated 5 years ago
- A repository for explaining feature attributions and feature interactions in deep neural networks. ☆192 · Jan 16, 2022 · Updated 4 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP). ☆84 · Dec 8, 2022 · Updated 3 years ago
- disentanglement_lib is an open-source library for research on learning disentangled representations. ☆1,421 · May 16, 2021 · Updated 4 years ago
- Fit interpretable models. Explain blackbox machine learning. ☆6,816 · Updated this week