DebangLi / one-pixel-attack-pytorch
PyTorch reimplementation of "One pixel attack for fooling deep neural networks"
☆84, updated 6 years ago
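The attack this repository reimplements perturbs a single pixel of the input to flip a classifier's decision; the paper searches for that pixel with differential evolution. As a minimal illustration of the idea (not this repo's actual code), the sketch below uses plain random search over one pixel's position and color against a hypothetical toy linear-softmax "model" — all names (`one_pixel_attack`, `predict`, `W`) are made up for the example.

```python
import numpy as np

def one_pixel_attack(img, predict, true_label, iters=200, seed=0):
    """Toy single-pixel attack via random search (the paper uses
    differential evolution instead). `predict` maps an image of
    shape (H, W, C) to a class-probability vector; we keep the
    candidate that most reduces confidence in the true label."""
    rng = np.random.default_rng(seed)
    h, w, c = img.shape
    best, best_conf = img.copy(), predict(img)[true_label]
    for _ in range(iters):
        cand = img.copy()
        x, y = rng.integers(h), rng.integers(w)
        cand[x, y] = rng.random(c)  # overwrite exactly one pixel
        conf = predict(cand)[true_label]
        if conf < best_conf:
            best, best_conf = cand, conf
    return best, best_conf

# Hypothetical toy model: linear-softmax over a flattened 3x3 RGB image.
rng = np.random.default_rng(1)
W = rng.normal(size=(2, 27))

def predict(im):
    z = W @ im.ravel()
    e = np.exp(z - z.max())
    return e / e.sum()

img = rng.random((3, 3, 3))
label = int(np.argmax(predict(img)))
adv, conf = one_pixel_attack(img, predict, label)
```

The adversarial image differs from the original in at most one pixel, and the model's confidence in the original label can only go down or stay equal.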
Related projects
Alternatives and complementary repositories for one-pixel-attack-pytorch
- PyTorch library for adversarial attack and training (☆143, updated 5 years ago)
- Improving Transferability of Adversarial Examples with Input Diversity (☆162, updated 5 years ago)
- Generative Adversarial Perturbations (CVPR 2018) (☆136, updated 3 years ago)
- Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks (ICCV 2019) (☆59, updated 5 years ago)
- The translation-invariant adversarial attack method to improve the transferability of adversarial examples (☆139, updated last year)
- A richly documented PyTorch implementation of the Carlini-Wagner L2 attack (☆59, updated 6 years ago)
- Code for ICLR 2020 "Improving Adversarial Robustness Requires Revisiting Misclassified Examples" (☆144, updated 4 years ago)
- PyTorch implementation of "Curls & Whey: Boosting Black-Box Adversarial Attacks" (☆60, updated 5 years ago)
- PyTorch implementation of Universal Adversarial Perturbation (https://arxiv.org/abs/1610.08401) (☆44, updated 5 years ago)
- Physical adversarial attack for fooling the Faster R-CNN object detector (☆156, updated 4 years ago)
- Adversarial Examples for Semantic Segmentation and Object Detection (☆122, updated 6 years ago)
- [ICCV 2019] Enhancing Adversarial Example Transferability with an Intermediate Level Attack (https://arxiv.org/abs/1907.10823) (☆76, updated 5 years ago)
- Mitigating Adversarial Effects Through Randomization (☆118, updated 6 years ago)
- Code for the ICML 2019 paper "Simple Black-box Adversarial Attacks" (☆195, updated last year)
- Implementation of the Boundary Attack algorithm as described in Brendel, Wieland, Jonas Rauber, and Matthias Bethge. "Decision-Based Adve… (☆92, updated 3 years ago)
- Code for "Adversarial Camouflage: Hiding Physical World Attacks with Natural Styles" (CVPR 2020) (☆87, updated last year)
- A targeted adversarial attack method, which won the NIPS 2017 targeted adversarial attacks competition (☆129, updated 6 years ago)
- Attacks Which Do Not Kill Training Make Adversarial Learning Stronger (ICML 2020) (☆124, updated last year)
- A PyTorch implementation of "Towards Evaluating the Robustness of Neural Networks" (☆53, updated 5 years ago)
- ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks (☆166, updated 3 years ago)
- PyTorch implementation of "One Pixel Attack for Fooling Deep Neural Networks" (☆23, updated 6 years ago)
- Official repository for "A Self-supervised Approach for Adversarial Robustness" (CVPR 2020 Oral) (☆97, updated 3 years ago)
- Simple PyTorch implementation of FGSM and I-FGSM (☆276, updated 6 years ago)
- Data-independent universal adversarial perturbations (☆60, updated 4 years ago)
- [ICLR 2020] A repository for extremely fast adversarial training using FGSM (☆435, updated 3 months ago)
- Robustness vs. Accuracy Survey on ImageNet (☆99, updated 3 years ago)
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks" (☆53, updated 4 years ago)
- Generalized Data-free Universal Adversarial Perturbations (☆69, updated 6 years ago)
- Code for "ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples" (CVPR 2019) (☆113, updated 2 years ago)
- Code for the CVPR 2019 paper "Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses" (☆134, updated 3 years ago)
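Several entries above (FGSM/I-FGSM, fast adversarial training) are built on the fast gradient sign method: take one step of size ε along the sign of the input gradient of the loss. As a minimal self-contained sketch — using a hypothetical linear-softmax model with an analytic input gradient rather than PyTorch autograd — FGSM looks like this:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, W, y, eps=0.1):
    """FGSM on a toy linear-softmax classifier (hypothetical setup):
    x_adv = clip(x + eps * sign(d loss / d x)), with cross-entropy loss.
    For logits z = W @ x, the input gradient is W.T @ (softmax(z) - onehot(y))."""
    p = softmax(W @ x)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad = W.T @ (p - onehot)            # analytic gradient w.r.t. the input
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy usage: attack the predicted class of a random linear model.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))           # 10 classes, flattened 28x28 "image"
x = rng.random(784)                      # pixel values in [0, 1]
y = int(np.argmax(softmax(W @ x)))       # attack the model's own prediction
x_adv = fgsm(x, W, y)
```

I-FGSM simply repeats this step several times with a smaller ε per step, re-clipping after each update; in a real PyTorch implementation the gradient would come from autograd rather than the closed form used here.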