☆113 · Updated Nov 21, 2022
Alternatives and similar repositories for sanity_checks_saliency
Users interested in sanity_checks_saliency are comparing it to the libraries listed below.
- Reference implementation for "Explanations can be manipulated and geometry is to blame" ☆37 · Updated Jul 24, 2022
- This repository contains the code for implementing Bidirectional Relevance scores for Digital Histopathology, which was used for the resu… ☆16 · Updated Mar 24, 2023
- ☆51 · Updated Aug 29, 2020
- IBD: Interpretable Basis Decomposition for Visual Explanation ☆52 · Updated Nov 28, 2018
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks ☆349 · Updated Jul 22, 2020
- Code for the AAAI 2018 paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the… ☆55 · Updated Dec 4, 2022
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" (NeurIPS 2019) for… ☆25 · Updated Feb 23, 2022
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆53 · Updated Mar 25, 2022
- Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" ☆32 · Updated Sep 25, 2019
- Interpretation of Neural Networks is Fragile ☆37 · Updated May 1, 2024
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) ☆12 · Updated Jan 10, 2023
- This repository provides a PyTorch implementation of "Fooling Neural Network Interpretations via Adversarial Model Manipulation". Our pap… ☆23 · Updated Dec 19, 2020
- Python code for tree ensemble interpretation ☆86 · Updated Jan 20, 2021
- There and Back Again: Revisiting Backpropagation Saliency Methods (CVPR 2020) ☆53 · Updated Apr 7, 2020
- Attributing predictions made by the Inception network using the Integrated Gradients method ☆646 · Updated Feb 23, 2022
- Interval attacks (adversarial ML) ☆21 · Updated Jun 17, 2019
- Model interpretability and understanding for PyTorch ☆5,584 · Updated Mar 21, 2026
- This repository provides a summary of recent empirical studies/human studies that measure human understanding with machine explanat… ☆14 · Updated Jul 24, 2024
- Overcoming Catastrophic Forgetting by Incremental Moment Matching (IMM) ☆35 · Updated Dec 27, 2017
- A lightweight implementation of removal-based explanations for ML models. ☆59 · Updated Jul 19, 2021
- Towards Automatic Concept-based Explanations ☆163 · Updated May 1, 2024
- A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also in… ☆760 · Updated Aug 25, 2020
- A toolbox to iNNvestigate neural networks' predictions! ☆1,307 · Updated Apr 11, 2025
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆31 · Updated Jul 15, 2020
- Explaining Image Classifiers by Counterfactual Generation ☆28 · Updated Apr 23, 2022
- ☆123 · Updated Mar 15, 2022
- To Trust Or Not To Trust A Classifier. A measure of uncertainty for any trained (possibly black-box) classifier which is more effective t… ☆181 · Updated Mar 23, 2023
- Code release for "Representer Point Selection for Explaining Deep Neural Network" (NeurIPS 2018) ☆67 · Updated Sep 13, 2021
- Code for ☆15 · Updated Oct 16, 2020
- [ICCV 2023] Evaluation and Improvement of Interpretability for Self-Explainable Part-Prototype Networks ☆19 · Updated Oct 12, 2023
- Layer-wise Relevance Propagation (LRP) for LSTMs. ☆225 · Updated Apr 24, 2020
- Code for reproducing the contrastive explanation in "Explanations based on the Missing: Towards Contrastive Explanations with Pertinent… ☆54 · Updated Jul 4, 2018
- A Diagnostic Study of Explainability Techniques for Text Classification ☆70 · Updated Oct 23, 2020
- OCEAN: Optimal Counterfactual Explanations in Tree Ensembles (ICML 2021) ☆35 · Updated Feb 16, 2026
- Fortifying Toxic Speech Detectors Against Veiled Toxicity ☆11 · Updated Oct 21, 2020
- TensorFlow 2.1 implementation of LRP for LSTMs ☆40 · Updated Jan 9, 2023
- Influence Estimation for Gradient-Boosted Decision Trees ☆29 · Updated May 27, 2024
- The official code release for "Unsupervised Out-of-distribution Detection with Diffusion Inpainting" (ICML 2023) ☆28 · Updated Aug 16, 2023
- Code for GLAT (Global Local Transformer), ECCV 2020 "Learning Visual Commonsense for Robust Scene Graph Generation" ☆11 · Updated Dec 16, 2020