DebangLi / one-pixel-attack-pytorch
PyTorch reimplementation of "One pixel attack for fooling deep neural networks"
☆85 · Updated 7 years ago
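For orientation, the attack this repository reimplements perturbs only a single pixel and uses differential evolution to search for the pixel's position and colour that break the classifier's prediction. Below is a minimal sketch of that idea, not the repository's actual code; the function name, hyperparameters, and model interface are illustrative assumptions.

```python
# Minimal sketch (assumption: not the repository's API) of an untargeted
# one-pixel attack: differential evolution searches over one pixel's (x, y)
# position and RGB value so that the model's confidence in the true class drops.
import torch
from scipy.optimize import differential_evolution

def one_pixel_attack(model, image, true_label, max_iter=30):
    """image: float tensor of shape (3, H, W) in [0, 1]; returns a perturbed copy."""
    model.eval()
    _, h, w = image.shape

    def apply_pixel(z, img):
        # z = (x, y, r, g, b): overwrite a single pixel with the candidate colour.
        x, y, r, g, b = z
        out = img.clone()
        out[:, int(y), int(x)] = torch.tensor([r, g, b], dtype=img.dtype)
        return out

    def objective(z):
        # Confidence assigned to the true class; lower is better for the attacker.
        with torch.no_grad():
            logits = model(apply_pixel(z, image).unsqueeze(0))
            probs = torch.softmax(logits, dim=1)
        return probs[0, true_label].item()

    bounds = [(0, w - 1), (0, h - 1), (0, 1), (0, 1), (0, 1)]
    result = differential_evolution(objective, bounds, maxiter=max_iter,
                                    popsize=10, seed=0)
    return apply_pixel(result.x, image)
```

The paper also evaluates targeted variants and larger pixel budgets (e.g. 3 or 5 pixels); the sketch above covers only the untargeted single-pixel case.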
Alternatives and similar repositories for one-pixel-attack-pytorch
Users interested in one-pixel-attack-pytorch are comparing it to the repositories listed below
- Improving Transferability of Adversarial Examples with Input Diversity ☆164 · Updated 6 years ago
- PyTorch library for adversarial attack and training ☆145 · Updated 6 years ago
- Mitigating Adversarial Effects Through Randomization ☆120 · Updated 7 years ago
- Code for ICML 2019 paper "Simple Black-box Adversarial Attacks" ☆198 · Updated 2 years ago
- A targeted adversarial attack method, which won the NIPS 2017 targeted adversarial attacks competition ☆133 · Updated 7 years ago
- Generative Adversarial Perturbations (CVPR 2018) ☆138 · Updated 4 years ago
- The translation-invariant adversarial attack method to improve the transferability of adversarial examples. ☆141 · Updated last year
- Implementation of the Boundary Attack algorithm as described in Brendel, Wieland, Jonas Rauber, and Matthias Bethge. "Decision-Based Adve… ☆96 · Updated 4 years ago
- PyTorch implementation of Universal Adversarial Perturbation (https://arxiv.org/abs/1610.08401) ☆45 · Updated 6 years ago
- Data-independent universal adversarial perturbations ☆62 · Updated 5 years ago
- Code for ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples (CVPR 2019) ☆114 · Updated 2 years ago
- Black-box attacks for deep neural network models ☆70 · Updated 6 years ago
- Physical adversarial attack for fooling the Faster R-CNN object detector ☆165 · Updated 5 years ago
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks" ☆53 · Updated 4 years ago
- Code for ICLR 2020 paper "Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets" ☆70 · Updated 4 years ago
- Adversarial Examples for Semantic Segmentation and Object Detection ☆123 · Updated 7 years ago
- Official repository for "A Self-supervised Approach for Adversarial Robustness" (CVPR 2020 Oral) ☆100 · Updated 4 years ago
- Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks, in ICCV 2019 ☆58 · Updated 5 years ago
- A richly documented PyTorch implementation of the Carlini-Wagner L2 attack ☆60 · Updated 6 years ago
- ☆41 · Updated last year
- Code for our CVPR 2018 paper, "On the Robustness of Semantic Segmentation Models to Adversarial Attacks" ☆100 · Updated 6 years ago
- Robustness vs Accuracy Survey on ImageNet ☆98 · Updated 3 years ago
- Generalized Data-free Universal Adversarial Perturbations ☆69 · Updated 6 years ago
- ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks ☆169 · Updated 3 years ago
- Code for ICLR 2020 paper "Improving Adversarial Robustness Requires Revisiting Misclassified Examples" ☆150 · Updated 4 years ago
- [ICCV 2019] Enhancing Adversarial Example Transferability with an Intermediate Level Attack (https://arxiv.org/abs/1907.10823) ☆78 · Updated 5 years ago
- A fast sparse attack on deep neural networks ☆50 · Updated 4 years ago
- PyTorch implementation of "Curls & Whey: Boosting Black-Box Adversarial Attacks" ☆60 · Updated 6 years ago
- A PyTorch implementation of "Towards Evaluating the Robustness of Neural Networks" ☆57 · Updated 5 years ago
- Official repository for the CVPR 2020 AdvML Workshop paper "Role of Spatial Context in Adversarial Robustness for Object Detection" ☆36 · Updated 5 years ago