ruthcfong / perturb_explanations
Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation"
☆30 · Updated 5 years ago
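For orientation, below is a minimal PyTorch sketch of the paper's central idea: learn a mask that blends the input with a blurred copy so that the smallest, smoothest deleted region suppresses the classifier's score for the target class. This is not the repository's code; the function name, mask resolution, blur settings, and regularization coefficients are illustrative assumptions.

```python
# Minimal sketch of "meaningful perturbation" (Fong & Vedaldi 2017), assuming a
# PyTorch image classifier. Hyperparameters and names are illustrative, not the
# repository's actual implementation.
import torch
import torch.nn.functional as F
import torchvision

def explain(model, image, target_class, steps=300, l1_coeff=1e-2, tv_coeff=1e-1):
    """image: (1, 3, H, W) tensor, already normalized for `model`."""
    model.eval()
    # Perturbation baseline: a heavily blurred copy of the input.
    blurred = torchvision.transforms.functional.gaussian_blur(
        image, kernel_size=21, sigma=10.0)
    # Learn the mask at low resolution and upsample it; 1 = keep pixel.
    mask = torch.ones(1, 1, 28, 28, requires_grad=True)
    optimizer = torch.optim.Adam([mask], lr=0.1)

    for _ in range(steps):
        m = F.interpolate(mask, size=image.shape[-2:],
                          mode="bilinear", align_corners=False)
        perturbed = image * m + blurred * (1 - m)      # blend original and blurred
        prob = F.softmax(model(perturbed), dim=1)[0, target_class]
        # Objective: delete as little as possible (L1 on 1 - mask), keep the mask
        # smooth (total variation), and drive the target-class probability down.
        l1 = l1_coeff * (1 - mask).abs().mean()
        tv = tv_coeff * ((mask[..., 1:, :] - mask[..., :-1, :]).abs().mean()
                         + (mask[..., :, 1:] - mask[..., :, :-1]).abs().mean())
        loss = prob + l1 + tv
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        mask.data.clamp_(0, 1)                         # keep mask values in [0, 1]
    return mask.detach()
```

The returned low-resolution mask can be upsampled and overlaid on the input as a saliency map; regions driven toward 0 are those whose removal most reduces the target-class score.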
Alternatives and similar repositories for perturb_explanations:
Users who are interested in perturb_explanations are comparing it to the repositories listed below.
- Code for Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks ☆30 · Updated 7 years ago
- ☆51 · Updated 4 years ago
- OD-test: A Less Biased Evaluation of Out-of-Distribution (Outlier) Detectors (PyTorch) ☆62 · Updated last year
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability". ☆30 · Updated 6 years ago
- Explaining Image Classifiers by Counterfactual Generation ☆28 · Updated 3 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆32 · Updated 4 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- Official repository for "Why are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps". ☆34 · Updated 5 years ago
- ☆109 · Updated 2 years ago
- [ECCV 2018] Code for Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance ☆57 · Updated 6 years ago
- Code for the CVPR 2021 paper: Understanding Failures of Deep Networks via Robust Feature Extraction ☆35 · Updated 2 years ago
- SmoothGrad implementation in PyTorch ☆171 · Updated 4 years ago
- ☆34 · Updated 6 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 3 years ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆53 · Updated 3 years ago
- Code release for Representer Point Selection for Explaining Deep Neural Network in NeurIPS 2018 ☆67 · Updated 3 years ago
- Interpretation of Neural Networks is Fragile ☆36 · Updated last year
- IBD: Interpretable Basis Decomposition for Visual Explanation ☆52 · Updated 6 years ago
- Investigating the robustness of state-of-the-art CNN architectures to simple spatial transformations. ☆49 · Updated 5 years ago
- Code for the paper 'On the Connection Between Adversarial Robustness and Saliency Map Interpretability' by C. Etmann, S. Lunz, P. Maass, … ☆16 · Updated 5 years ago
- Related materials for robust and explainable machine learning ☆48 · Updated 7 years ago
- Code to replicate "Generating Visual Explanations" ☆49 · Updated 4 years ago
- A PyTorch baseline attack example for the NIPS 2017 adversarial competition ☆85 · Updated 7 years ago
- Computing various norms/measures on over-parametrized neural networks ☆49 · Updated 6 years ago
- ☆34 · Updated 3 years ago
- ☆61 · Updated 2 years ago
- Repository of code for the experiments for the ICLR submission "An Empirical Investigation of Catastrophic Forgetting in Gradient-Based N… ☆68 · Updated 11 years ago
- ☆55 · Updated 4 years ago
- Computing various measures and generalization bounds on convolutional and fully connected networks ☆35 · Updated 6 years ago
- Adversarially Robust Neural Network on MNIST. ☆64 · Updated 3 years ago