Harry24k / AEPW-pytorch
A pytorch implementation of "Adversarial Examples in the Physical World"
☆17 · Updated 5 years ago
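The paper this repository implements (Kurakin et al., "Adversarial Examples in the Physical World") introduces the Basic Iterative Method (BIM), an iterative variant of FGSM with per-step clipping. The sketch below is a minimal PyTorch illustration of that attack, not the repository's own code; the `model`, `images`, and `labels` names and the `eps`/`alpha`/`steps` values are assumptions for the example.

```python
import torch
import torch.nn as nn

def bim_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """Basic Iterative Method (iterative FGSM) sketch.

    Assumes `model` maps an image batch to logits and inputs lie in [0, 1].
    """
    loss_fn = nn.CrossEntropyLoss()
    ori_images = images.clone().detach()
    adv_images = images.clone().detach()

    for _ in range(steps):
        adv_images.requires_grad_(True)
        loss = loss_fn(model(adv_images), labels)
        grad = torch.autograd.grad(loss, adv_images)[0]

        # Take a signed gradient ascent step, then project back into the
        # eps-ball around the original images and the valid pixel range.
        adv_images = adv_images.detach() + alpha * grad.sign()
        delta = torch.clamp(adv_images - ori_images, min=-eps, max=eps)
        adv_images = torch.clamp(ori_images + delta, 0, 1).detach()

    return adv_images
```

The per-step clipping is what distinguishes BIM from a single FGSM step: each update stays within an L-infinity ball of radius `eps` around the clean input.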
Alternatives and similar repositories for AEPW-pytorch
Users interested in AEPW-pytorch are comparing it to the repositories listed below.
- ☆85 · Updated 4 years ago
- ☆41 · Updated last year
- ☆14 · Updated 4 years ago
- The code of our AAAI 2021 paper "Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-transform Domain" ☆16 · Updated 4 years ago
- a pytorch version of AdvGAN for cifar10 dataset ☆10 · Updated 5 years ago
- Code for CVPR2020 paper QEBA: Query-Efficient Boundary-Based Blackbox Attack ☆32 · Updated 4 years ago
- ☆26 · Updated 2 years ago
- Code for "PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier" ☆41 · Updated 2 years ago
- ☆21 · Updated 4 years ago
- ☆51 · Updated 3 years ago
- Tensorflow implementation of Generating Adversarial Examples with Adversarial Networks ☆43 · Updated 6 years ago
- Simple yet effective targeted transferable attack (NeurIPS 2021) ☆51 · Updated 2 years ago
- My entry for ICLR 2018 Reproducibility Challenge for paper Synthesizing robust adversarial examples https://openreview.net/pdf?id=BJDH5M-… ☆73 · Updated 7 years ago
- The code of ICCV2021 paper "Meta Gradient Adversarial Attack" ☆24 · Updated 3 years ago
- ☆19 · Updated 3 years ago
- CVPR 2021 Official repository for the Data-Free Model Extraction paper. https://arxiv.org/abs/2011.14779 ☆72 · Updated last year
- A pytorch implementation of "Towards Evaluating the Robustness of Neural Networks" ☆57 · Updated 5 years ago
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks" ☆53 · Updated 4 years ago
- Pytorch implementation of Adversarial Patch on ImageNet (arXiv: https://arxiv.org/abs/1712.09665) ☆62 · Updated 5 years ago
- an efficient method for detecting adversarial image examples ☆19 · Updated 7 years ago
- ☆60 · Updated 3 years ago
- Paper list of Adversarial Examples ☆48 · Updated last year
- Repository for Certified Defenses for Adversarial Patch ICLR-2020 ☆32 · Updated 4 years ago
- Implementation of Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning paper ☆20 · Updated 5 years ago
- ☆70 · Updated 4 years ago
- Code for "Adversarial Camouflage: Hiding Physical World Attacks with Natural Styles" (CVPR 2020) ☆92 · Updated 2 years ago
- A PyTorch implementation of universal adversarial perturbation (UAP) which is easier to understand and implement ☆56 · Updated 3 years ago
- Enhancing the Transferability of Adversarial Attacks through Variance Tuning ☆87 · Updated last year
- Universal Adversarial Perturbations (UAPs) for PyTorch ☆48 · Updated 3 years ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆17 · Updated 6 years ago