DebangLi / one-pixel-attack-pytorch
PyTorch reimplementation of "One pixel attack for fooling deep neural networks"
☆85 · Updated 7 years ago
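For context, here is a minimal sketch of the one-pixel attack idea, not this repository's code: a single pixel's coordinates and RGB value are searched with SciPy's differential evolution so that the model's confidence in the true class drops. The `model` argument, the `[0, 1]` input range, and the hyperparameters below are illustrative assumptions.

```python
# Minimal sketch of the one-pixel attack (illustrative, not this repo's code).
# A single pixel (x, y) and its RGB value are optimized with differential
# evolution to minimize the model's probability for the true class.
import torch
import torch.nn.functional as F
from scipy.optimize import differential_evolution


def one_pixel_attack(model, image, true_label, device="cpu", maxiter=30, popsize=40):
    """image: float tensor of shape (3, H, W) with values in [0, 1] (assumed)."""
    model.eval()
    _, H, W = image.shape

    def apply_pixel(z):
        x, y, r, g, b = z
        adv = image.clone()
        adv[:, int(y), int(x)] = torch.tensor([r, g, b], dtype=image.dtype)
        return adv

    def objective(z):
        adv = apply_pixel(z).unsqueeze(0).to(device)
        with torch.no_grad():
            probs = F.softmax(model(adv), dim=1)
        # Lower value = less confidence in the true class.
        return probs[0, true_label].item()

    # Search space: pixel coordinates plus an RGB value in [0, 1].
    bounds = [(0, W - 1), (0, H - 1), (0, 1), (0, 1), (0, 1)]
    result = differential_evolution(objective, bounds, maxiter=maxiter,
                                    popsize=popsize, tol=1e-5, seed=0)
    return apply_pixel(result.x), result.fun  # adversarial image, remaining confidence
```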
Alternatives and similar repositories for one-pixel-attack-pytorch:
Users interested in one-pixel-attack-pytorch are comparing it to the libraries listed below.
- PyTorch library for adversarial attack and training (☆145, updated 6 years ago)
- The translation-invariant adversarial attack method to improve the transferability of adversarial examples (☆141, updated last year)
- Improving Transferability of Adversarial Examples with Input Diversity (☆163, updated 5 years ago)
- Implementation of the Boundary Attack algorithm as described in Brendel, Wieland, Jonas Rauber, and Matthias Bethge, "Decision-Based Adve… (☆95, updated 4 years ago)
- PyTorch implementation of Universal Adversarial Perturbation (https://arxiv.org/abs/1610.08401) (☆43, updated 6 years ago)
- Generative Adversarial Perturbations (CVPR 2018) (☆137, updated 4 years ago)
- Code for ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples (CVPR 2019) (☆113, updated 2 years ago)
- A richly documented PyTorch implementation of the Carlini-Wagner L2 attack (☆60, updated 6 years ago)
- Mitigating Adversarial Effects Through Randomization (☆121, updated 7 years ago)
- Code for the ICML 2019 paper "Simple Black-box Adversarial Attacks" (☆198, updated 2 years ago)
- [ICCV 2019] Enhancing Adversarial Example Transferability with an Intermediate Level Attack (https://arxiv.org/abs/1907.10823) (☆78, updated 5 years ago)
- Code for the ICLR 2020 paper "Improving Adversarial Robustness Requires Revisiting Misclassified Examples" (☆147, updated 4 years ago)
- Generalized Data-free Universal Adversarial Perturbations (☆70, updated 6 years ago)
- ☆40, updated last year
- Black-box attacks for deep neural network models (☆69, updated 6 years ago)
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks" (☆53, updated 4 years ago)
- Code for "Adversarial Camouflage: Hiding Physical World Attacks with Natural Styles" (CVPR 2020) (☆91, updated 2 years ago)
- Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks (ICCV 2019) (☆59, updated 5 years ago)
- ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks (☆170, updated 3 years ago)
- PyTorch implementation of "Curls & Whey: Boosting Black-Box Adversarial Attacks" (☆60, updated 5 years ago)
- A PyTorch implementation of "Explaining and harnessing adversarial examples" (☆67, updated 5 years ago)
- Code for the ICLR 2020 paper "Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets" (☆71, updated 4 years ago)
- Code for the unrestricted adversarial examples paper (NeurIPS 2018) (☆64, updated 5 years ago)
- ☆85, updated 4 years ago
- Attacks Which Do Not Kill Training Make Adversarial Learning Stronger (ICML 2020) (☆125, updated last year)
- Data-independent universal adversarial perturbations (☆61, updated 5 years ago)
- Adversarial Examples for Semantic Segmentation and Object Detection (☆123, updated 7 years ago)
- A fast sparse attack on deep neural networks (☆50, updated 4 years ago)
- Official repository for "A Self-supervised Approach for Adversarial Robustness" (CVPR 2020 Oral) (☆100, updated 3 years ago)
- Official repository for the CVPR 2020 AdvML Workshop paper "Role of Spatial Context in Adversarial Robustness for Object Detection" (☆36, updated 4 years ago)