ruthcfong / perturb_explanations
Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation"
☆31 · Updated 6 years ago
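For context, the idea behind this repository is compact enough to sketch. The snippet below is a minimal, hypothetical PyTorch re-implementation of the meaningful-perturbation approach, not the repository's own code: learn a coarse mask over the image, replace masked-out regions with a blurred copy, and minimize the target class score plus a penalty on how much is deleted (the paper also adds a total-variation regularizer, omitted here for brevity). The `explain` helper, the ResNet-50 backbone, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of Fong & Vedaldi 2017 "meaningful perturbation",
# not the code from this repository.
import torch
import torch.nn.functional as F
from torchvision import models
from torchvision.transforms import functional as TF

def explain(img, target_class, steps=300, lam=0.05):
    """img: 1x3xHxW ImageNet-normalized tensor; returns a 1x1x28x28 saliency map."""
    model = models.resnet50(weights="IMAGENET1K_V2").eval()
    for p in model.parameters():
        p.requires_grad_(False)

    # Heavily blurred copy serves as the "information removed" reference.
    blurred = TF.gaussian_blur(img, kernel_size=21, sigma=10.0)

    # Optimize a coarse mask in logit space; sigmoid keeps values in [0, 1].
    mask_logits = torch.zeros(1, 1, 28, 28, requires_grad=True)
    opt = torch.optim.Adam([mask_logits], lr=0.1)

    for _ in range(steps):
        m = torch.sigmoid(mask_logits)
        m_up = F.interpolate(m, size=img.shape[-2:], mode="bilinear",
                             align_corners=False)
        # m = 1 keeps the original pixel, m = 0 swaps in the blurred pixel.
        perturbed = m_up * img + (1.0 - m_up) * blurred
        score = F.softmax(model(perturbed), dim=1)[0, target_class]
        # Drive the class score down while deleting as little as possible.
        loss = score + lam * (1.0 - m_up).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # High values mark regions whose removal most hurt the class score.
    return 1.0 - torch.sigmoid(mask_logits).detach()
```

Optimizing a low-resolution mask and upsampling it is a cheap stand-in for the paper's smoothness regularizers; it biases the explanation toward contiguous blobs rather than adversarial pixel noise.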
Alternatives and similar repositories for perturb_explanations
Users interested in perturb_explanations are comparing it to the libraries listed below.
- Code for Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks ☆30 · Updated 7 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆31 · Updated 5 years ago
- SmoothGrad implementation in PyTorch ☆172 · Updated 4 years ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability" ☆30 · Updated 6 years ago
- IBD: Interpretable Basis Decomposition for Visual Explanation ☆52 · Updated 7 years ago
- Related materials for robust and explainable machine learning ☆48 · Updated 7 years ago
- Explaining Image Classifiers by Counterfactual Generation ☆28 · Updated 3 years ago
- OD-test: A Less Biased Evaluation of Out-of-Distribution (Outlier) Detectors (PyTorch) ☆62 · Updated 2 years ago
- ☆51 · Updated 5 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- PyTorch implementation of recent visual attribution methods for model interpretability ☆146 · Updated 5 years ago
- Investigating the robustness of state-of-the-art CNN architectures to simple spatial transformations ☆49 · Updated 6 years ago
- Training Confidence-Calibrated Classifier for Detecting Out-of-Distribution Samples (ICLR 2018) ☆182 · Updated 5 years ago
- ☆112 · Updated 3 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated 7 years ago
- A PyTorch baseline attack example for the NIPS 2017 adversarial competition ☆86 · Updated 8 years ago
- Data, code & materials from the paper "Generalisation in humans and deep neural networks" (NeurIPS 2018) ☆95 · Updated 2 years ago
- The Ultimate Reference for Out-of-Distribution Detection with Deep Neural Networks ☆118 · Updated 5 years ago
- Code for the paper "On the Connection Between Adversarial Robustness and Saliency Map Interpretability" by C. Etmann, S. Lunz, P. Maass, … ☆16 · Updated 6 years ago
- Principled Detection of Out-of-Distribution Examples in Neural Networks ☆202 · Updated 8 years ago
- Interpretation of Neural Networks is Fragile ☆36 · Updated last year
- Notebooks for reproducing the paper "Computer Vision with a Single (Robust) Classifier" ☆129 · Updated 6 years ago
- Overcoming Catastrophic Forgetting by Incremental Moment Matching (IMM) ☆35 · Updated 8 years ago
- ☆11 · Updated 6 years ago
- Code for "Robustness May Be at Odds with Accuracy" ☆91 · Updated 2 years ago
- Official repository for "Why are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps" ☆34 · Updated 6 years ago
- Computing various norms/measures on over-parametrized neural networks ☆50 · Updated 7 years ago
- ☆34 · Updated 7 years ago
- NIPS Adversarial Vision Challenge ☆41 · Updated 7 years ago
- [ECCV 2018] Code for "Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance" ☆57 · Updated 7 years ago