ruthcfong / perturb_explanations
Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation"
☆31, updated 5 years ago
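As a rough illustration of the paper's idea (not this repository's actual code), the sketch below optimizes a soft mask so that blurring the masked-out region lowers the classifier's score for the target class, while L1 and total-variation terms keep the mask small and smooth. `model`, the image tensor, and all hyperparameters are placeholders.

```python
# Hedged sketch of the "meaningful perturbation" objective
# (Fong & Vedaldi, 2017); hyperparameters and the blur are illustrative.
import torch
import torch.nn.functional as F

def meaningful_perturbation(model, img, target, steps=300, lr=0.1,
                            l1_coeff=1e-2, tv_coeff=1e-1, tv_beta=3.0,
                            mask_size=28):
    """img: (1, 3, H, W) tensor; returns a low-res mask (1 = keep, 0 = perturb)."""
    model.eval()
    # Crude blur via down/up-sampling; the paper uses Gaussian blur or noise.
    blurred = F.interpolate(
        F.interpolate(img, scale_factor=0.1, mode='bilinear', align_corners=False),
        size=img.shape[-2:], mode='bilinear', align_corners=False)
    # Low-resolution mask parameters, pushed into [0, 1] by a sigmoid.
    mask = torch.ones(1, 1, mask_size, mask_size, requires_grad=True)
    optimizer = torch.optim.Adam([mask], lr=lr)
    for _ in range(steps):
        m = torch.sigmoid(mask)
        m_up = F.interpolate(m, size=img.shape[-2:], mode='bilinear',
                             align_corners=False)
        # Keep pixels where m is high, replace with the blurred image elsewhere.
        perturbed = img * m_up + blurred * (1 - m_up)
        prob = F.softmax(model(perturbed), dim=1)[0, target]
        # Total-variation term encourages a smooth, blob-like mask.
        tv = ((m[:, :, :, :-1] - m[:, :, :, 1:]).abs() ** tv_beta).sum() + \
             ((m[:, :, :-1, :] - m[:, :, 1:, :]).abs() ** tv_beta).sum()
        # Small deleted area + smooth mask + low target-class probability.
        loss = l1_coeff * (1 - m).mean() + tv_coeff * tv + prob
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return torch.sigmoid(mask).detach()
```

The paper additionally averages the objective over jittered inputs and considers blur, noise, and constant perturbations; those details are omitted here for brevity.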
Alternatives and similar repositories for perturb_explanations
Users interested in perturb_explanations are comparing it to the repositories listed below.
- Code for Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks ☆31, updated 7 years ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability" ☆30, updated 6 years ago
- Explaining Image Classifiers by Counterfactual Generation ☆28, updated 3 years ago
- SmoothGrad implementation in PyTorch ☆172, updated 4 years ago
- OD-test: A Less Biased Evaluation of Out-of-Distribution (Outlier) Detectors (PyTorch) ☆62, updated last year
- Related materials for robust and explainable machine learning ☆48, updated 7 years ago
- Overcoming Catastrophic Forgetting by Incremental Moment Matching (IMM) ☆35, updated 7 years ago
- IBD: Interpretable Basis Decomposition for Visual Explanation ☆52, updated 6 years ago
- ☆112, updated 2 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆32, updated 5 years ago
- ☆51, updated 4 years ago
- PyTorch implementation of recent visual attribution methods for model interpretability ☆146, updated 5 years ago
- Principled Detection of Out-of-Distribution Examples in Neural Networks ☆202, updated 8 years ago
- Computing various norms/measures on over-parametrized neural networks ☆49, updated 6 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100, updated 3 years ago
- Data, code & materials from the paper "Generalisation in humans and deep neural networks" (NeurIPS 2018) ☆96, updated last year
- Release of CIFAR-10.1, a new test set for CIFAR-10 ☆223, updated 5 years ago
- Real-time image saliency 🌠 (NIPS 2017) ☆125, updated 7 years ago
- PyTorch implementation of SOSELETO ☆15, updated 5 years ago
- ☆66, updated 6 years ago
- Interpretation of Neural Network is Fragile ☆36, updated last year
- Notebooks for reproducing the paper "Computer Vision with a Single (Robust) Classifier" ☆128, updated 5 years ago
- The Ultimate Reference for Out of Distribution Detection with Deep Neural Networks ☆118, updated 5 years ago
- Code for Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights ☆182, updated 6 years ago
- Investigating the robustness of state-of-the-art CNN architectures to simple spatial transformations ☆49, updated 5 years ago
- Training Confidence-Calibrated Classifier for Detecting Out-of-Distribution Samples / ICLR 2018 ☆182, updated 5 years ago
- ☆34, updated 6 years ago
- [ECCV 2018] code for Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance ☆57, updated 6 years ago
- Code for the paper 'Understanding Measures of Uncertainty for Adversarial Example Detection' ☆61, updated 7 years ago
- A DIRT-T Approach to Unsupervised Domain Adaptation (ICLR 2018) ☆175, updated 7 years ago