hendrycks / fooling
Code for the Adversarial Image Detectors and a Saliency Map
☆12 · Updated 7 years ago
Related projects
Alternatives and complementary repositories for fooling
- Code release for the ICML 2019 paper "Are generative classifiers more robust to adversarial attacks?" ☆23 · Updated 5 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆31 · Updated 4 years ago
- ☆21 · Updated 4 years ago
- Code for AAAI 2018 accepted paper: "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the… ☆54 · Updated last year
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations", NeurIPS 2019 ☆46 · Updated last year
- ☆12 · Updated 5 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated 6 years ago
- Logit Pairing Methods Can Fool Gradient-Based Attacks [NeurIPS 2018 Workshop on Security in Machine Learning] ☆18 · Updated 5 years ago
- ☆18 · Updated 5 years ago
- Code for Stability Training with Noise (STN) ☆21 · Updated 3 years ago
- Code for the paper 'On the Connection Between Adversarial Robustness and Saliency Map Interpretability' by C. Etmann, S. Lunz, P. Maass, … ☆16 · Updated 5 years ago
- An (imperfect) implementation of wide resnets and Parseval regularization ☆8 · Updated 4 years ago
- ☆19 · Updated 3 years ago
- Implementation for What it Thinks is Important is Important: Robustness Transfers through Input Gradients (CVPR 2020 Oral) ☆16 · Updated last year
- Code for the paper: Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization (https://arxiv.org/abs/2… ☆23 · Updated 3 years ago
- Research prototype of deletion-efficient k-means algorithms ☆23 · Updated 4 years ago
- Code we used in "Decision Boundary Analysis of Adversarial Examples" (https://openreview.net/forum?id=BkpiPMbA-) ☆27 · Updated 6 years ago
- Analysis of Adversarial Logit Pairing ☆60 · Updated 6 years ago
- ☆11 · Updated 4 years ago
- Code for the paper "Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation" by Alexander Levine and Soheil Feizi. ☆10 · Updated 2 years ago
- A general method for training a cost-sensitive robust classifier ☆21 · Updated 5 years ago
- ☆29 · Updated 5 years ago
- Unofficial implementation of the paper 'Adversarial Training for Free' ☆21 · Updated 5 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples. ☆44 · Updated 4 years ago
- ☆35 · Updated 3 years ago
- On the effectiveness of adversarial training against common corruptions [UAI 2022] ☆30 · Updated 2 years ago
- Code for "Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors" ☆61 · Updated 4 years ago
- Code for "Prior Convictions: Black-box Adversarial Attacks with Bandits and Priors" ☆14 · Updated 6 years ago
- Interval attacks (adversarial ML) ☆21 · Updated 5 years ago
- ☆25 · Updated 5 years ago