mndu / guided-feature-inversion
PyTorch code for the KDD 2018 paper: Towards Explanation of DNN-based Prediction with Guided Feature Inversion
☆21 · Updated 5 years ago
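The paper's approach, as its title suggests, explains a prediction by inverting the network's features under guidance from the target class, recovering the input evidence that drives the decision. As an illustration only, the sketch below shows the general mask-optimization idea behind such feature-inversion / perturbation-based explanations; the baseline image, loss weights, and function names are assumptions for this example and are not taken from this repository's code.

```python
import torch
import torch.nn.functional as F

def explain_with_mask(model, image, target_class, steps=200, lr=0.1,
                      l1_weight=0.05, tv_weight=0.1):
    """Optimize a soft spatial mask so that the masked image alone still
    supports the target prediction (illustrative sketch, not the repo's code)."""
    model.eval()
    baseline = torch.zeros_like(image)                             # reference "deleted" input
    mask = torch.full(image.shape[-2:], 0.5, requires_grad=True)   # H x W mask in [0, 1]
    optimizer = torch.optim.Adam([mask], lr=lr)

    for _ in range(steps):
        m = mask.clamp(0, 1)
        composite = m * image + (1 - m) * baseline                 # keep only masked evidence
        score = F.log_softmax(model(composite), dim=1)[0, target_class]
        # Total-variation term encourages a smooth, contiguous mask.
        tv = (m[1:, :] - m[:-1, :]).abs().mean() + (m[:, 1:] - m[:, :-1]).abs().mean()
        loss = -score + l1_weight * m.mean() + tv_weight * tv
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return mask.detach().clamp(0, 1)

# Hypothetical usage with any image classifier and a (1, 3, H, W) input tensor:
# heatmap = explain_with_mask(model, preprocessed_image, target_class=243)
```

The L1 and total-variation penalties are the usual way to push the mask toward a small, contiguous region; the actual repository may formulate or weight these terms differently.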
Related projects
Alternatives and complementary repositories for guided-feature-inversion
- Code for the AAAI 2018 paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the…" ☆54 · Updated last year
- An (imperfect) implementation of wide resnets and Parseval regularization ☆8 · Updated 4 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples ☆43 · Updated 4 years ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations" (NeurIPS 2019) ☆46 · Updated last year
- ☆19 · Updated 3 years ago
- Code we used in "Decision Boundary Analysis of Adversarial Examples" (https://openreview.net/forum?id=BkpiPMbA-) ☆27 · Updated 6 years ago
- Interpretation of Neural Networks is Fragile ☆36 · Updated 6 months ago
- Code for the Adversarial Image Detectors and a Saliency Map ☆12 · Updated 7 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆99 · Updated 2 years ago
- A general method for training a cost-sensitive robust classifier ☆21 · Updated 5 years ago
- Code for reproducing the white-box adversarial attacks in "EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples"… ☆21 · Updated 6 years ago
- Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network ☆61 · Updated 5 years ago
- Rob-GAN: Generator, Discriminator and Adversarial Attacker ☆83 · Updated 5 years ago
- Code for the paper "Dimensionality-Driven Learning with Noisy Labels" (ICML 2018) ☆58 · Updated 4 months ago
- ☆21 · Updated 4 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated 6 years ago
- An Algorithm to Quantify Robustness of Recurrent Neural Networks ☆46 · Updated 4 years ago
- Reverse Cross Entropy for Adversarial Detection (NeurIPS 2018) ☆45 · Updated 3 years ago
- ☆18 · Updated 5 years ago
- Implementation of "What it Thinks is Important is Important: Robustness Transfers through Input Gradients" (CVPR 2020 Oral) ☆16 · Updated last year
- ☆11 · Updated 4 years ago
- Implementation of our NeurIPS 2018 paper: Deep Defense: Training DNNs with Improved Adversarial Robustness ☆39 · Updated 5 years ago
- Repository for our ICCV 2019 paper: Adversarial Defense via Learning to Generate Diverse Attacks ☆21 · Updated 3 years ago
- Interpretable Explanations of Black Boxes by Meaningful Perturbation (PyTorch) ☆12 · Updated 2 months ago
- ☆25 · Updated 5 years ago
- [ECCV 2018] Towards Privacy-Preserving Visual Recognition via Adversarial Training: A Pilot Study ☆38 · Updated 2 years ago
- Adversarial learning by utilizing model interpretation ☆10 · Updated 6 years ago
- Code for Stability Training with Noise (STN) ☆21 · Updated 3 years ago
- Project page for our paper: Interpreting Adversarially Trained Convolutional Neural Networks ☆64 · Updated 5 years ago
- Interval attacks (adversarial ML) ☆21 · Updated 5 years ago