as791 / Adversarial-Example-Attack-and-Defense
This repository contains implementations of three adversarial example attack methods (FGSM, I-FGSM, MI-FGSM) and of defensive distillation as a defense against all three attacks, evaluated on the MNIST dataset.
☆127 · Updated 4 years ago
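Below is a minimal sketch of the momentum iterative attack family this repository covers (FGSM, I-FGSM, MI-FGSM), assuming a PyTorch classifier `model`, MNIST-style image batches `x` in [0, 1] with shape (N, C, H, W), and integer labels `y`; the function name and the defaults for `eps`, `steps`, and `decay` are illustrative, not the repository's actual API.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=0.3, steps=10, decay=1.0):
    """Momentum iterative FGSM (MI-FGSM).

    steps=1 reduces to plain FGSM; decay=0.0 gives I-FGSM.
    Assumes x is an image batch of shape (N, C, H, W) scaled to [0, 1].
    """
    alpha = eps / steps            # per-step size keeps the total L-inf perturbation within eps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)        # accumulated momentum of normalized gradients
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # normalize the gradient by its mean absolute value, then accumulate momentum
        g = decay * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        # project back into the eps-ball around x and the valid pixel range [0, 1]
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0.0, 1.0)
    return x_adv.detach()
```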
Alternatives and similar repositories for Adversarial-Example-Attack-and-Defense
Users interested in Adversarial-Example-Attack-and-Defense are comparing it to the libraries listed below.
- AdvAttacks: adversarial examples; FGSM; JSMA; CW; single pixel attack; local search attack; DeepFool ☆58 · Updated 5 years ago
- PyTorch implementation of Adversarial Patch on ImageNet (arXiv: https://arxiv.org/abs/1712.09665) ☆63 · Updated 5 years ago
- Using relativism to improve GAN-based adversarial attacks. 🦾 ☆43 · Updated 2 years ago
- Implementation of gradient-based adversarial attacks (FGSM, MI-FGSM, PGD); a minimal PGD sketch follows this list ☆89 · Updated 3 years ago
- Enhancing the Transferability of Adversarial Attacks through Variance Tuning ☆87 · Updated last year
- Invisible Backdoor Attack with Sample-Specific Triggers ☆94 · Updated 2 years ago
- Using FGSM, I-FGSM and MI-FGSM to generate and evaluate adversarial samples. ☆12 · Updated 5 years ago
- ☆31 · Updated 3 years ago
- An implementation demo of the ICLR 2021 paper "Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks" ☆122 · Updated 3 years ago
- ☆85 · Updated 4 years ago
- Patch-wise iterative attack (accepted by ECCV 2020) to improve the transferability of adversarial examples. ☆90 · Updated 3 years ago
- A PyTorch implementation of "Explaining and Harnessing Adversarial Examples" ☆67 · Updated 5 years ago
- An adversarial attack on object detectors ☆151 · Updated 3 years ago
- The code of the ICCV 2021 paper "Meta Gradient Adversarial Attack" ☆24 · Updated 3 years ago
- A PyTorch implementation of "Towards Evaluating the Robustness of Neural Networks" ☆57 · Updated 5 years ago
- Official repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" ☆127 · Updated last year
- A curated list of papers on adversarial machine learning (adversarial examples and defense methods). ☆210 · Updated 2 years ago
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems ☆27 · Updated 4 years ago
- ICCV 2021 ☆20 · Updated 3 years ago
- Paper list of Adversarial Examples ☆48 · Updated last year
- ☆51 · Updated 3 years ago
- Code for Adv-watermark: A novel watermark perturbation for adversarial examples (ACM MM 2020) ☆41 · Updated 4 years ago
- ☆70 · Updated 3 years ago
- Resources on adversarial examples and poisoning attacks ☆116 · Updated 5 years ago
- Code for "Adversarial attack by dropping information." (ICCV 2021)☆75Updated 3 years ago
- A pytorch implementation of "Adversarial Examples in the Physical World"☆17Updated 5 years ago
- A pytorch implementation of "Towards Deep Learning Models Resistant to Adversarial Attacks"☆154Updated 5 years ago
- A novel data-free model stealing method based on GAN☆127Updated 2 years ago
- a Pytorch implementation of the paper "Generating Adversarial Examples with Adversarial Networks" (advGAN).☆270Updated 4 years ago
- This is for releasing the source code of the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks"☆57Updated 6 months ago
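For the PGD-related entries above (the gradient-based attack implementation and the PyTorch port of "Towards Deep Learning Models Resistant to Adversarial Attacks"), here is a minimal untargeted L-infinity PGD sketch under the same assumptions as the earlier example (a PyTorch classifier `model`, inputs `x` in [0, 1], labels `y`); `eps`, `alpha`, and `steps` are illustrative defaults rather than values taken from any listed repository.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """Untargeted L-inf PGD: random start in the eps-ball, then projected gradient ascent."""
    # random initialization inside the eps-ball, clipped to the valid pixel range
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        # project back into the eps-ball around the clean input and the valid pixel range
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0.0, 1.0)
    return x_adv.detach()
```

The only difference from the I-FGSM sketch earlier is the random start and a step size `alpha` chosen independently of `eps` and `steps`.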