mndu / guided-feature-inversion
PyTorch code for the KDD 2018 paper "Towards Explanation of DNN-based Prediction with Guided Feature Inversion"
☆21 · Updated 6 years ago
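For context on what this repository does: methods in this family explain a classifier's prediction by optimizing a soft mask that keeps only the input regions needed to preserve the predicted class. The snippet below is a minimal, hypothetical PyTorch sketch of that general mask-optimization idea, not the paper's exact algorithm; the model, target label, blur baseline, and hyperparameters are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

# Hypothetical setup: any differentiable classifier and a preprocessed image.
model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1).eval()
for p in model.parameters():
    p.requires_grad_(False)           # only the mask is optimized
image = torch.randn(1, 3, 224, 224)   # stand-in for a real normalized image
target_class = 243                    # hypothetical target label

# Uninformative baseline: a heavily blurred copy of the input.
baseline = F.avg_pool2d(image, kernel_size=11, stride=1, padding=5)

# Optimize a low-resolution mask (upsampled before use) so the explanation
# stays smooth instead of degenerating into pixel-level adversarial noise.
mask_logits = torch.zeros(1, 1, 28, 28, requires_grad=True)
optimizer = torch.optim.Adam([mask_logits], lr=0.1)
sparsity_weight = 0.05                # hypothetical trade-off coefficient

for _ in range(300):
    m = F.interpolate(torch.sigmoid(mask_logits), size=image.shape[-2:],
                      mode="bilinear", align_corners=False)
    # Masked-in pixels come from the image, masked-out pixels from the baseline.
    blended = m * image + (1.0 - m) * baseline
    prob = F.softmax(model(blended), dim=1)[0, target_class]
    # Preserve the target prediction while keeping the retained region small.
    loss = -torch.log(prob + 1e-8) + sparsity_weight * m.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

saliency = torch.sigmoid(mask_logits).detach()  # higher value = more important
```

Optimizing a coarse mask and blending toward a blurred baseline are common design choices in this line of work; they bias the result toward contiguous, human-readable regions rather than scattered pixels.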
Alternatives and similar repositories for guided-feature-inversion
Users interested in guided-feature-inversion are comparing it to the repositories listed below.
- Code for AAAI 2018 accepted paper: "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the…" ☆55 · Updated 2 years ago
- Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network ☆62 · Updated 5 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples ☆45 · Updated 5 years ago
- An (imperfect) implementation of wide resnets and Parseval regularization ☆9 · Updated 5 years ago
- ☆19 · Updated 3 years ago
- Interpretation of Neural Network is Fragile ☆36 · Updated last year
- Code used in "Decision Boundary Analysis of Adversarial Examples" (https://openreview.net/forum?id=BkpiPMbA-) ☆27 · Updated 6 years ago
- An Algorithm to Quantify Robustness of Recurrent Neural Networks ☆48 · Updated 5 years ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations", NeurIPS 2019 ☆47 · Updated 2 years ago
- Reverse Cross Entropy for Adversarial Detection (NeurIPS 2018) ☆45 · Updated 4 years ago
- Project page for our paper: Interpreting Adversarially Trained Convolutional Neural Networks ☆66 · Updated 5 years ago
- ☆25 · Updated 5 years ago
- ☆18 · Updated 5 years ago
- This repository is for the NeurIPS 2018 spotlight paper "Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples." ☆31 · Updated 3 years ago
- Code for the paper "Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation" by Alexander Levine and Soheil Feizi ☆10 · Updated 2 years ago
- Rob-GAN: Generator, Discriminator and Adversarial Attacker ☆85 · Updated 6 years ago
- Adversarial learning by utilizing model interpretation ☆10 · Updated 6 years ago
- A method based on manifold regularization for training adversarially robust neural networks ☆9 · Updated 5 years ago
- Implementation for "What it Thinks is Important is Important: Robustness Transfers through Input Gradients" (CVPR 2020 Oral) ☆16 · Updated 2 years ago
- Interval attacks (adversarial ML) ☆21 · Updated 5 years ago
- Adversarial Defense for Ensemble Models (ICML 2019) ☆61 · Updated 4 years ago
- Code for a NeurIPS 2019 paper ☆47 · Updated 4 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- Max Mahalanobis Training (ICML 2018 + ICLR 2020) ☆90 · Updated 4 years ago
- A general method for training a cost-sensitive robust classifier ☆22 · Updated 5 years ago
- Code for the paper "Dimensionality-Driven Learning with Noisy Labels" (ICML 2018) ☆58 · Updated 11 months ago
- Latent Space Virtual Adversarial Training (ECCV 2020) ☆17 · Updated 4 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated 7 years ago
- Further improve robustness of mixup-trained models in inference (ICLR 2020) ☆60 · Updated 4 years ago
- Code release for the ICML 2019 paper "Are generative classifiers more robust to adversarial attacks?" ☆23 · Updated 6 years ago