Carco-git / CW_Attack_on_MNIST
Reproduction of the Carlini-Wagner (CW) attack in PyTorch, with a corresponding MNIST model
☆22 · Updated 4 years ago
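The repository reproduces the Carlini-Wagner L2 attack. As a rough illustration of the objective it optimizes, here is a minimal sketch on a toy linear model: minimize ||δ||² + c·f(x+δ), where f is the CW margin loss max(max_{i≠t} Z_i − Z_t, −κ). Everything here (the random linear model `W`, `b`, the constants `c`, `kappa`, `lr`) is an illustrative assumption, not the repository's code, which operates on a CNN over MNIST.

```python
import numpy as np

# Toy linear "classifier" with logits Z(x) = W x + b.
# W, b, c, kappa, lr are illustrative choices, not the repo's values.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # 3 classes, 4 input features
b = np.zeros(3)
x = rng.normal(size=4)        # clean input

z0 = W @ x + b
target = int(np.argmin(z0))   # attack toward the least-likely class
c, kappa, lr = 5.0, 0.0, 0.05

def margin(z, t):
    # CW f(x'): positive while the attack has not yet succeeded.
    other = np.max(np.delete(z, t))
    return other - z[t]

delta = np.zeros_like(x)
for _ in range(300):
    z = W @ (x + delta) + b
    if margin(z, target) > -kappa:
        # Gradient of the margin term: logit rows of the strongest
        # non-target class minus the target class.
        idx = int(np.argmax(np.delete(z, target)))
        idx = idx if idx < target else idx + 1
        grad_f = W[idx] - W[target]
    else:
        grad_f = np.zeros_like(x)
    # Descend on ||delta||^2 + c * f(x + delta).
    delta -= lr * (2.0 * delta + c * grad_f)

z_adv = W @ (x + delta) + b
print("margin before:", margin(z0, target), "after:", margin(z_adv, target))
```

The actual attack additionally uses a change-of-variables to keep pixels in range and binary-searches over the trade-off constant `c`; this sketch only shows the core loss being descended.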
Related projects:
- Final Project for AM 207, Fall 2021: review and experimentation with the paper "Adversarial Examples Are Not Bugs, They Are Features" ☆9 · Updated 2 years ago
- AdvAttacks: adversarial examples; FGSM; JSMA; CW; single-pixel attack; local search attack; DeepFool ☆54 · Updated 5 years ago
- ☆48 · Updated 2 years ago
- Using relativism to improve GAN-based adversarial attacks. 🦾 ☆39 · Updated last year
- A PyTorch implementation of "Towards Evaluating the Robustness of Neural Networks" ☆52 · Updated 5 years ago
- Enhancing the Transferability of Adversarial Attacks through Variance Tuning ☆81 · Updated 6 months ago
- ☆66 · Updated 3 years ago
- ☆24 · Updated last year
- The implementation of our paper "Composite Adversarial Attacks" (AAAI 2021) ☆30 · Updated 2 years ago
- The code of our AAAI 2021 paper "Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-transform Domain" ☆14 · Updated 3 years ago
- ☆63 · Updated 3 years ago
- Decision-based Adversarial Attack with Frequency Mixup ☆20 · Updated last year
- Implementation of gradient-based adversarial attacks (FGSM, MI-FGSM, PGD) ☆74 · Updated 3 years ago
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆29 · Updated 3 years ago
- Official TensorFlow implementation of "Improving Adversarial Transferability via Neuron Attribution-based Attacks" (CVPR 2022) ☆33 · Updated last year
- Code for the ICLR 2020 paper "Improving Adversarial Robustness Requires Revisiting Misclassified Examples" ☆143 · Updated 3 years ago
- Code for the CVPR 2020 paper "Towards Transferable Targeted Attack" ☆14 · Updated 2 years ago
- A PyTorch implementation of "Towards Deep Learning Models Resistant to Adversarial Attacks" ☆144 · Updated 5 years ago
- Code for "Feature Importance-aware Transferable Adversarial Attacks" ☆73 · Updated 2 years ago
- A PyTorch implementation of universal adversarial perturbation (UAP) that is easier to understand and implement ☆53 · Updated 2 years ago
- Using FGSM, I-FGSM, and MI-FGSM to generate and evaluate adversarial samples ☆11 · Updated 5 years ago
- ☆22 · Updated last year
- Towards Efficient and Effective Adversarial Training (NeurIPS 2021) ☆16 · Updated 2 years ago
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆25 · Updated 3 years ago
- A PyTorch implementation of "Adversarial Examples in the Physical World" ☆17 · Updated 5 years ago
- ☆11 · Updated 4 years ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" ☆28 · Updated 2 years ago
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆47 · Updated 5 years ago
- Attacking a dog-vs-fish classifier that uses transfer learning with Inception v3 ☆67 · Updated 6 years ago
- Code for "Label-Consistent Backdoor Attacks" ☆48 · Updated 3 years ago