☆113 · Updated Nov 21, 2022
Alternatives and similar repositories for sanity_checks_saliency
Users interested in sanity_checks_saliency are comparing it to the libraries listed below.
Sorting:
- Reference implementation for "Explanations can be manipulated and geometry is to blame" (☆37 · updated Jul 24, 2022)
- Showing the relationship between ImageNet IDs/labels and PyTorch pre-trained model output IDs/labels (☆10 · updated Oct 11, 2020)
- Code for the paper "On the Connection Between Adversarial Robustness and Saliency Map Interpretability" by C. Etmann, S. Lunz, P. Maass, … (☆16 · updated May 9, 2019)
- IBD: Interpretable Basis Decomposition for Visual Explanation (☆52 · updated Nov 28, 2018)
- Framework-agnostic implementation of state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more) (☆992 · updated Mar 20, 2024)
- ☆51 · updated Aug 29, 2020
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks (☆349 · updated Jul 22, 2020)
- Interpretation of Neural Networks is Fragile (☆36 · updated May 1, 2024)
- PyTorch implementation of "Fooling Neural Network Interpretations via Adversarial Model Manipulation". Our pap… (☆23 · updated Dec 19, 2020)
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" (☆54 · updated Mar 25, 2022)
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) (☆13 · updated Jan 10, 2023)
- Repository of the paper "Defining Locality for Surrogates in Post-hoc Interpretability", published at the 2018 ICML Workshop on Human Interpret… (☆17 · updated Nov 9, 2021)
- Code implementing Bidirectional Relevance scores for Digital Histopathology, used for the resu… (☆16 · updated Mar 24, 2023)
- Python implementation for evaluating the explanations presented in "On the (In)fidelity and Sensitivity of Explanations" (NeurIPS 2019) for… (☆25 · updated Feb 23, 2022)
- Code for the TCAV ML interpretability project (☆653 · updated Feb 5, 2026)
- Explaining Image Classifiers by Counterfactual Generation (☆28 · updated Apr 23, 2022)
- Towards Automatic Concept-based Explanations (☆162 · updated May 1, 2024)
- Code for reproducing the contrastive explanations in "Explanations based on the Missing: Towards Contrastive Explanations with Pertinent… (☆54 · updated Jul 4, 2018)
- Official implementation of the paper "Learning to Scaffold: Optimizing Model Explanations for Teaching" (☆20 · updated May 19, 2022)
- A lightweight implementation of removal-based explanations for ML models (☆59 · updated Jul 19, 2021)
- Code for the AAAI 2018 paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the… (☆55 · updated Dec 4, 2022)
- Visualizing Deep Neural Network Decisions: Prediction Difference Analysis (☆122 · updated Oct 31, 2017)
- Attributing predictions made by the Inception network using the Integrated Gradients method (☆645 · updated Feb 23, 2022)
- Official code repo for the paper "How does This Interaction Affect Me? Interpretable Attribution for Feature Interactions", in NeurIPS 2… (☆42 · updated Oct 31, 2022)
- Detect a model's attention (☆171 · updated Jul 2, 2020)
- ☆123 · updated Mar 15, 2022
- Notebooks for JHU EN 601.320/420/620 (☆10 · updated May 1, 2019)
- Adversarial learning utilizing model interpretation (☆10 · updated Oct 19, 2018)
- Unsupervised-Data-Augmentation-PyTorch (☆12 · updated Dec 8, 2022)
- My practical approach to learning neural network concepts (☆10 · updated Jun 11, 2019)
- Code for "Using Embeddings to Correct for Unobserved Confounding" (☆10 · updated May 31, 2019)
- ☆11 · updated Dec 7, 2020
- Code release for "Representer Point Selection for Explaining Deep Neural Networks" (NeurIPS 2018) (☆67 · updated Sep 13, 2021)
- Code for "Robustness May Be at Odds with Accuracy" (☆91 · updated Mar 24, 2023)
- Saliency calculation module for Chainer (☆12 · updated May 28, 2019)
- Tools for robustness evaluation of interpretability methods (☆11 · updated Jun 25, 2021)
- Towards Visual Explanations for Convolutional Neural Networks via Input Resampling (☆13 · updated Aug 16, 2017)
- A toolbox to iNNvestigate neural networks' predictions! (☆1,307 · updated Apr 11, 2025)
- There and Back Again: Revisiting Backpropagation Saliency Methods (CVPR 2020) (☆53 · updated Apr 7, 2020)
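Many of the repositories above implement or evaluate gradient-based attribution methods such as SmoothGrad. As a rough orientation only (not code from any listed repo), here is a minimal NumPy sketch of the SmoothGrad idea — average the gradient over noisy copies of the input — using a toy quadratic "model" whose gradient is known analytically; all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable "model": score(x) = sum of squares, so the exact
# gradient at x is 2 * x. A real saliency library would use autodiff
# on a trained network instead.
def grad(x):
    return 2.0 * x

def smoothgrad(x, n=50, sigma=0.1):
    # SmoothGrad: average gradients over n noisy copies of the input.
    noisy_grads = [grad(x + rng.normal(0.0, sigma, x.shape)) for _ in range(n)]
    return np.mean(noisy_grads, axis=0)

x = rng.random((8, 8))               # one small "image"
saliency = np.abs(smoothgrad(x))     # per-pixel attribution map
print(saliency.shape)                # (8, 8)
```

The averaging suppresses the high-frequency noise that raw input gradients exhibit, which is exactly the kind of behavior the sanity-check and robustness repositories above are designed to probe.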