ruthcfong / perturb_explanations
Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation"
☆31 · Updated 6 years ago
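For context on what the repository implements: the paper's central idea is to optimize a mask that blends the input image with a blurred copy so that the classifier's score for a target class drops, while keeping the perturbed region small. Below is a minimal, illustrative sketch of that idea in PyTorch; the model choice, function names, and hyperparameters are assumptions for the example, not the repository's actual API.

```python
# Minimal sketch (not this repo's API) of the meaningful-perturbation idea:
# learn a mask m so that blending the image with a blurred copy where m is low
# suppresses the classifier's score for the target class, while keeping the
# deleted region small via a lambda * ||1 - m||_1 penalty.
import torch
import torch.nn.functional as F
import torchvision
import torchvision.transforms.functional as TF

model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)

def explain(image, target, steps=300, lam=0.05):
    # image: normalized (1, 3, H, W) tensor; target: class index to explain
    blurred = TF.gaussian_blur(image, kernel_size=31, sigma=10.0)
    mask = torch.ones(1, 1, *image.shape[-2:], requires_grad=True)  # 1 = keep pixel
    opt = torch.optim.Adam([mask], lr=0.1)
    for _ in range(steps):
        m = mask.clamp(0, 1)
        perturbed = image * m + blurred * (1 - m)             # blend original and blurred
        score = F.softmax(model(perturbed), dim=1)[0, target]
        loss = score + lam * (1 - m).abs().mean()             # drop score, keep deletions small
        opt.zero_grad()
        loss.backward()
        opt.step()
    return mask.detach().clamp(0, 1)  # low values mark regions important to the prediction
```

The paper additionally optimizes a low-resolution mask with total-variation regularization and random jitter to avoid adversarial artifacts; those details are omitted from this sketch.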
Alternatives and similar repositories for perturb_explanations
Users interested in perturb_explanations are comparing it to the repositories listed below.
- Code for Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks ☆30 · Updated 7 years ago
- OD-test: A Less Biased Evaluation of Out-of-Distribution (Outlier) Detectors (PyTorch) ☆62 · Updated 2 years ago
- SmoothGrad implementation in PyTorch ☆172 · Updated 4 years ago
- IBD: Interpretable Basis Decomposition for Visual Explanation ☆52 · Updated 7 years ago
- Related materials for robust and explainable machine learning ☆48 · Updated 7 years ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability". ☆30 · Updated 6 years ago
- ☆51 · Updated 5 years ago
- Explaining Image Classifiers by Counterfactual Generation ☆28 · Updated 3 years ago
- Principled Detection of Out-of-Distribution Examples in Neural Networks ☆202 · Updated 8 years ago
- Data, code & materials from the paper "Generalisation in humans and deep neural networks" (NeurIPS 2018) ☆94 · Updated 2 years ago
- The Ultimate Reference for Out of Distribution Detection with Deep Neural Networks ☆118 · Updated 6 years ago
- ☆113 · Updated 3 years ago
- Pytorch implementation of recent visual attribution methods for model interpretability ☆146 · Updated 5 years ago
- Training Confidence-Calibrated Classifier for Detecting Out-of-Distribution Samples / ICLR 2018 ☆182 · Updated 5 years ago
- Official repository for "Why are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps". ☆34 · Updated 6 years ago
- Code for the paper 'Understanding Measures of Uncertainty for Adversarial Example Detection' ☆62 · Updated 7 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- ☆70 · Updated 6 years ago
- Real-time image saliency 🌠 (NIPS 2017) ☆126 · Updated 7 years ago
- Computing various norms/measures on over-parametrized neural networks ☆50 · Updated 7 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 4 years ago
- [ECCV 2018] code for Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance ☆57 · Updated 7 years ago
- Overcoming Catastrophic Forgetting by Incremental Moment Matching (IMM) ☆35 · Updated 8 years ago
- ☆11 · Updated 6 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆31 · Updated 5 years ago
- Implementation of Information Dropout ☆39 · Updated 8 years ago
- Release of CIFAR-10.1, a new test set for CIFAR-10. ☆225 · Updated 5 years ago
- Code for "Testing Robustness Against Unforeseen Adversaries" ☆80 · Updated last year
- Pytorch implementation of Real Time Image Saliency for Black Box Classifiers https://arxiv.org/abs/1705.07857 ☆59 · Updated 6 years ago
- ☆34 · Updated 7 years ago