Alternatives and similar repositories for sanity_checks_saliency
☆113 · Nov 21, 2022 · Updated 3 years ago
Users interested in sanity_checks_saliency are comparing it to the repositories listed below.
- Showing the relationship between ImageNet ID and labels and pytorch pre-trained model output ID and labels ☆10 · Oct 11, 2020 · Updated 5 years ago
- This repository contains the code for implementing Bidirectional Relevance scores for Digital Histopathology, which was used for the resu… ☆16 · Mar 24, 2023 · Updated 3 years ago
- ☆52 · Aug 29, 2020 · Updated 5 years ago
- IBD: Interpretable Basis Decomposition for Visual Explanation ☆52 · Nov 28, 2018 · Updated 7 years ago
- Repository of the paper "Defining Locality for Surrogates in Post-hoc Interpretablity" published at 2018 ICML Workshop on Human Interpret… ☆17 · Nov 9, 2021 · Updated 4 years ago
- Code for the paper "On the Connection Between Adversarial Robustness and Saliency Map Interpretability" by C. Etmann, S. Lunz, P. Maass, … ☆16 · May 9, 2019 · Updated 6 years ago
- Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more) ☆995 · Mar 20, 2024 · Updated 2 years ago
- Code for the TCAV ML interpretability project ☆653 · Feb 5, 2026 · Updated 2 months ago
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks ☆350 · Jul 22, 2020 · Updated 5 years ago
- Code for AAAI 2018 accepted paper: "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the… ☆55 · Dec 4, 2022 · Updated 3 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Feb 23, 2022 · Updated 4 years ago
- Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" ☆32 · Sep 25, 2019 · Updated 6 years ago
- This repository provides a PyTorch implementation of "Fooling Neural Network Interpretations via Adversarial Model Manipulation". Our pap… ☆23 · Dec 19, 2020 · Updated 5 years ago
- Python code for tree ensemble interpretation ☆86 · Jan 20, 2021 · Updated 5 years ago
- There and Back Again: Revisiting Backpropagation Saliency Methods (CVPR 2020) ☆53 · Apr 7, 2020 · Updated 6 years ago
- Attributing predictions made by the Inception network using the Integrated Gradients method ☆648 · Feb 23, 2022 · Updated 4 years ago
- Interval attacks (adversarial ML) ☆21 · Jun 17, 2019 · Updated 6 years ago
- This repository provides a summarization of recent empirical studies/human studies that measure human understanding with machine explanat… ☆14 · Jul 24, 2024 · Updated last year
- Model interpretability and understanding for PyTorch ☆5,600 · Updated this week
- Overcoming Catastrophic Forgetting by Incremental Moment Matching (IMM) ☆35 · Dec 27, 2017 · Updated 8 years ago
- A lightweight implementation of removal-based explanations for ML models ☆59 · Jul 19, 2021 · Updated 4 years ago
- Towards Automatic Concept-based Explanations ☆163 · May 1, 2024 · Updated last year
- A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also in… ☆758 · Aug 25, 2020 · Updated 5 years ago
- A toolbox to iNNvestigate neural networks' predictions! ☆1,306 · Apr 11, 2025 · Updated last year
- Explaining Image Classifiers by Counterfactual Generation ☆28 · Apr 23, 2022 · Updated 3 years ago
- ☆124 · Mar 15, 2022 · Updated 4 years ago
- To Trust Or Not To Trust A Classifier. A measure of uncertainty for any trained (possibly black-box) classifier which is more effective t… ☆180 · Mar 23, 2023 · Updated 3 years ago
- SmoothGrad implementation in PyTorch ☆172 · Apr 4, 2021 · Updated 5 years ago
- Code for "Robustness May Be at Odds with Accuracy" ☆90 · Mar 24, 2023 · Updated 3 years ago
- Code release for "Representer Point Selection for Explaining Deep Neural Networks" in NeurIPS 2018 ☆67 · Sep 13, 2021 · Updated 4 years ago
- Layer-wise Relevance Propagation (LRP) for LSTMs ☆225 · Apr 24, 2020 · Updated 5 years ago
- A Diagnostic Study of Explainability Techniques for Text Classification ☆70 · Oct 23, 2020 · Updated 5 years ago
- OCEAN: Optimal Counterfactual Explanations in Tree Ensembles (ICML 2021) ☆36 · Updated this week
- Fortifying Toxic Speech Detectors Against Veiled Toxicity ☆11 · Oct 21, 2020 · Updated 5 years ago
- Code for our paper "Visualizing and Understanding Atari Agents" (https://goo.gl/AMAoSc) ☆126 · Oct 21, 2021 · Updated 4 years ago
- Tensorflow 2.1 implementation of LRP for LSTMs ☆40 · Jan 9, 2023 · Updated 3 years ago
- Influence Estimation for Gradient-Boosted Decision Trees ☆29 · May 27, 2024 · Updated last year
- The official code release for "Unsupervised Out-of-distribution Detection with Diffusion Inpainting" (ICML 2023) ☆28 · Aug 16, 2023 · Updated 2 years ago
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP! ☆29 · Jul 13, 2019 · Updated 6 years ago
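Several of the repositories above implement gradient-based attribution methods such as Integrated Gradients. As a rough illustration of the core idea only (not taken from any of the listed repositories), here is a minimal NumPy sketch that approximates Integrated Gradients for a toy quadratic model; the model, its analytic gradient, and the step count are illustrative assumptions.

```python
import numpy as np

# Toy "model": a fixed quadratic form f(x) = x^T W x,
# standing in for a neural network's scalar output.
W = np.array([[2.0, 0.5], [0.5, 1.0]])

def f(x):
    return float(x @ W @ x)

def grad_f(x):
    # Analytic gradient of the quadratic form above.
    return (W + W.T) @ x

def integrated_gradients(x, baseline, steps=256):
    # Midpoint Riemann-sum approximation of the path integral
    # of the gradient along the straight line from baseline to x.
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

x = np.array([1.0, -2.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(x, baseline)

# Completeness axiom: attributions sum to f(x) - f(baseline).
print(attr.sum(), f(x) - f(baseline))  # both ≈ 4.0
```

In a real setting, `grad_f` would be the model's gradient from an autodiff framework, and the baseline is typically a zero or blurred input; the completeness check at the end is a standard sanity test for IG implementations.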