fartashf / under_convnet
Caffe code for the paper "Adversarial Manipulation of Deep Representations"
☆17 · Updated 8 years ago
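The paper behind this repository perturbs an image so that its deep-layer representation matches that of an unrelated guide image while the perturbation itself stays small. Below is a minimal PyTorch sketch of that objective, not the repository's original Caffe code; the VGG-16 feature extractor, layer cut-off, and epsilon/step settings are illustrative assumptions, and Adam with a simple box projection stands in for the paper's constrained optimizer.

```python
# Sketch of the representation-matching idea from
# "Adversarial Manipulation of Deep Representations".
# Assumptions: torchvision VGG-16 features (roughly up to conv4_3) stand in for
# the original Caffe model; epsilon, steps, and lr are illustrative only.
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
feat = models.vgg16(weights="IMAGENET1K_V1").features[:23].to(device).eval()

def manipulate(source, guide, epsilon=10 / 255, steps=200, lr=0.01):
    """Return a perturbed copy of `source` whose deep features imitate `guide`."""
    with torch.no_grad():
        target_feat = feat(guide)                      # representation to imitate
    delta = torch.zeros_like(source, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (source + delta).clamp(0, 1)             # keep pixels in valid range
        loss = (feat(adv) - target_feat).pow(2).sum()  # match the guide's features
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                          # project perturbation back into the box
            delta.clamp_(-epsilon, epsilon)
    return (source + delta).detach().clamp(0, 1)
```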
Alternatives and similar repositories for under_convnet
Users who are interested in under_convnet are comparing it to the repositories listed below.
- Investigating the robustness of state-of-the-art CNN architectures to simple spatial transformations. ☆49 · Updated 6 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated 7 years ago
- Code for the unrestricted adversarial examples paper (NeurIPS 2018) ☆65 · Updated 6 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆31 · Updated 5 years ago
- Code for the AAAI 2018 paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the…" ☆55 · Updated 3 years ago
- Adversarial Images for Variational Autoencoders ☆13 · Updated 9 years ago
- Analysis of Adversarial Logit Pairing ☆60 · Updated 7 years ago
- Generalized Data-free Universal Adversarial Perturbations ☆73 · Updated 7 years ago
- Code used in "Decision Boundary Analysis of Adversarial Examples" (https://openreview.net/forum?id=BkpiPMbA-) ☆29 · Updated 7 years ago
- Ensemble Adversarial Training on MNIST ☆121 · Updated 8 years ago
- Data-independent universal adversarial perturbations ☆63 · Updated 5 years ago
- ☆20 · Updated 5 years ago
- Coupling rejection strategy against adversarial attacks (CVPR 2022) ☆29 · Updated 3 years ago
- Official TensorFlow implementation of Adversarial Training for Free!, which trains robust models at no extra cost compared to natural trai… ☆177 · Updated last year
- CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness), a robustness metric for deep neural networks ☆63 · Updated 4 years ago
- Code corresponding to the paper "Adversarial Examples are not Easily Detected..." ☆89 · Updated 8 years ago
- Code for the paper "On the Connection Between Adversarial Robustness and Saliency Map Interpretability" by C. Etmann, S. Lunz, P. Maass, … ☆16 · Updated 6 years ago
- Provably defending pretrained classifiers, including the Azure, Google, AWS, and Clarifai APIs ☆100 · Updated 4 years ago
- Code for Stability Training with Noise (STN) ☆22 · Updated 5 years ago
- Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks (ICCV 2019) ☆58 · Updated 6 years ago
- Max Mahalanobis Training (ICML 2018 + ICLR 2020) ☆90 · Updated 5 years ago
- Public repo for the transferability ICLR 2017 paper ☆52 · Updated 7 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- NIPS 2017 - Adversarial Learning ☆35 · Updated 8 years ago
- Code release for the ICML 2019 paper "Are generative classifiers more robust to adversarial attacks?" ☆23 · Updated 6 years ago
- Smooth Adversarial Training ☆68 · Updated 5 years ago
- StrAttack (ICLR 2019) ☆33 · Updated 6 years ago
- PyTorch Adversarial Attack Framework ☆78 · Updated 6 years ago
- ☆48 · Updated 4 years ago
- ☆18 · Updated 6 years ago