adebayoj / sanity_checks_saliency
☆113 · Nov 21, 2022 · Updated 3 years ago
Alternatives and similar repositories for sanity_checks_saliency
Users interested in sanity_checks_saliency are comparing it to the libraries listed below.
- Reference implementation for "explanations can be manipulated and geometry is to blame" ☆37 · Jul 24, 2022 · Updated 3 years ago
- Showing the relationship between ImageNet IDs and labels and PyTorch pre-trained model output IDs and labels ☆10 · Oct 11, 2020 · Updated 5 years ago
- Code for the paper 'On the Connection Between Adversarial Robustness and Saliency Map Interpretability' by C. Etmann, S. Lunz, P. Maass, … ☆16 · May 9, 2019 · Updated 6 years ago
- IBD: Interpretable Basis Decomposition for Visual Explanation ☆52 · Nov 28, 2018 · Updated 7 years ago
- Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more). ☆992 · Mar 20, 2024 · Updated last year
- ☆51 · Aug 29, 2020 · Updated 5 years ago
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks ☆348 · Jul 22, 2020 · Updated 5 years ago
- Interpretation of Neural Network is Fragile ☆36 · May 1, 2024 · Updated last year
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆54 · Mar 25, 2022 · Updated 3 years ago
- This repository contains the code for implementing Bidirectional Relevance scores for Digital Histopathology, which was used for the resu… ☆16 · Mar 24, 2023 · Updated 2 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Feb 23, 2022 · Updated 3 years ago
- Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" ☆32 · Sep 25, 2019 · Updated 6 years ago
- Code for the TCAV ML interpretability project ☆650 · Feb 5, 2026 · Updated last week
- Explaining Image Classifiers by Counterfactual Generation ☆28 · Apr 23, 2022 · Updated 3 years ago
- This is the official implementation for the paper "Learning to Scaffold: Optimizing Model Explanations for Teaching" ☆19 · May 19, 2022 · Updated 3 years ago
- Towards Automatic Concept-based Explanations ☆162 · May 1, 2024 · Updated last year
- A lightweight implementation of removal-based explanations for ML models. ☆59 · Jul 19, 2021 · Updated 4 years ago
- ☆20 · Oct 12, 2021 · Updated 4 years ago
- Code for AAAI 2018 accepted paper: "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the… ☆55 · Dec 4, 2022 · Updated 3 years ago
- Visualizing Deep Neural Network Decisions: Prediction Difference Analysis ☆121 · Oct 31, 2017 · Updated 8 years ago
- Attributing predictions made by the Inception network using the Integrated Gradients method ☆644 · Feb 23, 2022 · Updated 3 years ago
- Official Code Repo for the Paper: "How does This Interaction Affect Me? Interpretable Attribution for Feature Interactions", In NeurIPS 2… ☆42 · Oct 31, 2022 · Updated 3 years ago
- Detect model's attention ☆170 · Jul 2, 2020 · Updated 5 years ago
- ☆122 · Mar 15, 2022 · Updated 3 years ago
- A Diagnostic Study of Explainability Techniques for Text Classification ☆69 · Oct 23, 2020 · Updated 5 years ago
- Notebooks for JHU EN 601.320/420/620 ☆10 · May 1, 2019 · Updated 6 years ago
- Unsupervised-Data-Augmentation-PyTorch ☆12 · Dec 8, 2022 · Updated 3 years ago
- My practical approach to learning Neural Network concepts ☆10 · Jun 11, 2019 · Updated 6 years ago
- Adversarial learning by utilizing model interpretation ☆10 · Oct 19, 2018 · Updated 7 years ago
- Code release for "Representer Point Selection for Explaining Deep Neural Networks" in NeurIPS 2018 ☆67 · Sep 13, 2021 · Updated 4 years ago
- Code for "Robustness May Be at Odds with Accuracy" ☆91 · Mar 24, 2023 · Updated 2 years ago
- Influence Estimation for Gradient-Boosted Decision Trees ☆29 · May 27, 2024 · Updated last year
- Tools for robustness evaluation in interpretability methods ☆11 · Jun 25, 2021 · Updated 4 years ago
- A toolbox to iNNvestigate neural networks' predictions! ☆1,306 · Apr 11, 2025 · Updated 10 months ago
- Model interpretability and understanding for PyTorch ☆5,556 · Feb 3, 2026 · Updated last week
- A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also in… ☆760 · Aug 25, 2020 · Updated 5 years ago
- Layer-wise Relevance Propagation (LRP) for LSTMs. ☆226 · Apr 24, 2020 · Updated 5 years ago
- Codebase for "Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains Its Predictions" (to appear in AA… ☆77 · Nov 21, 2017 · Updated 8 years ago
- To Trust Or Not To Trust A Classifier. A measure of uncertainty for any trained (possibly black-box) classifier which is more effective t… ☆178 · Mar 23, 2023 · Updated 2 years ago