pkmr06 / pytorch-smoothgrad
SmoothGrad implementation in PyTorch
☆168 · Updated 3 years ago
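The repository's description is minimal, so for orientation here is a rough sketch of the SmoothGrad technique it implements (Smilkov et al., 2017): average vanilla gradient saliency maps over several Gaussian-noised copies of the input. The function name, signature, and parameter defaults below are illustrative and are not the repo's actual API.

```python
import torch

def smoothgrad(model, x, target, n_samples=25, noise_scale=0.15):
    """Average input gradients over Gaussian-noised copies of x.

    Illustrative sketch of SmoothGrad; not this repo's API.
    """
    model.eval()
    # Noise std is typically set relative to the input's value range.
    sigma = noise_scale * (x.max() - x.min())
    grad_sum = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
        score = model(noisy)[0, target]  # scalar logit for the target class
        score.backward()
        grad_sum += noisy.grad
    return grad_sum / n_samples  # averaged saliency map, same shape as x
```

The averaging suppresses the high-frequency noise that makes raw gradient saliency maps hard to read; larger `n_samples` gives smoother maps at higher cost.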
Related projects
Alternatives and complementary repositories for pytorch-smoothgrad
- PyTorch implementation of "Interpretable Explanations of Black Boxes by Meaningful Perturbation" (☆334, updated 2 years ago)
- PyTorch implementation of recent visual attribution methods for model interpretability (☆145, updated 4 years ago)
- Notebooks for reproducing the paper "Computer Vision with a Single (Robust) Classifier" (☆127, updated 5 years ago)
- ☆109, updated 2 years ago
- Real-time image saliency 🌠 (NIPS 2017) (☆126, updated 6 years ago)
- Principled Detection of Out-of-Distribution Examples in Neural Networks (☆201, updated 7 years ago)
- Light version of Network Dissection for quantifying interpretability of networks (☆216, updated 5 years ago)
- Code for "Robustness May Be at Odds with Accuracy" (☆93, updated last year)
- IBD: Interpretable Basis Decomposition for Visual Explanation (☆51, updated 5 years ago)
- Code for "Learning Perceptually-Aligned Representations via Adversarial Robustness" (☆159, updated 4 years ago)
- Visualizing how deep networks make decisions (☆66, updated 5 years ago)
- ☆48, updated 4 years ago
- Example code for the paper "Understanding Deep Learning Requires Rethinking Generalization" (☆177, updated 4 years ago)
- Variational autoencoder implemented in PyTorch, trained on the CelebA dataset (☆167, updated 7 years ago)
- Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" (☆30, updated 5 years ago)
- A PyTorch baseline attack example for the NIPS 2017 adversarial competition (☆85, updated 7 years ago)
- OD-test: A Less Biased Evaluation of Out-of-Distribution (Outlier) Detectors (PyTorch) (☆62, updated last year)
- Code used in the paper "Predicting with High Correlation Features" (https://arxiv.org/abs/1910.00164) (☆54, updated 5 years ago)
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks (☆344, updated 4 years ago)
- Release of CIFAR-10.1, a new test set for CIFAR-10 (☆220, updated 4 years ago)
- A PyTorch implementation of a Jacobian regularizer that encourages representations more robust to input perturbations (☆123, updated last year)
- PyTorch implementation of "Real Time Image Saliency for Black Box Classifiers" (https://arxiv.org/abs/1705.07857) (☆58, updated 5 years ago)
- Project page for the paper "Interpreting Adversarially Trained Convolutional Neural Networks" (☆64, updated 5 years ago)
- Official repository for "Why are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps" (☆33, updated 5 years ago)
- Code for "Testing Robustness Against Unforeseen Adversaries" (☆80, updated 3 months ago)
- Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks (ICCV 2019) (☆59, updated 5 years ago)
- Code for the CVPR 2018 paper "On the Robustness of Semantic Segmentation Models to Adversarial Attacks" (☆101, updated 5 years ago)
- Data, code, and materials from the paper "Generalisation in Humans and Deep Neural Networks" (NeurIPS 2018) (☆95, updated last year)
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) (☆99, updated 2 years ago)
- Code release for "Representer Point Selection for Explaining Deep Neural Networks" (NeurIPS 2018) (☆67, updated 3 years ago)