wang-yutao / Attack_Fashion_MNIST
Adversarial attacks and adversarial training for image classification models (using the Fashion MNIST dataset)
☆9 · Updated 4 years ago
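For context, the topic of the repository above is a one-step gradient attack (FGSM) plus adversarial training on Fashion-MNIST. The sketch below is illustrative only: the small CNN, hyper-parameters, and training loop are assumptions, not the repository's actual code.

```python
# Minimal FGSM + adversarial-training sketch on Fashion-MNIST (PyTorch).
# Model architecture, epsilon, and loss weighting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets, transforms

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64 * 7 * 7, 10)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def fgsm(model, x, y, eps=0.1):
    """One-step FGSM: move x by eps in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()   # keep pixels in the valid range

if __name__ == "__main__":
    loader = torch.utils.data.DataLoader(
        datasets.FashionMNIST("data", train=True, download=True,
                              transform=transforms.ToTensor()),
        batch_size=128, shuffle=True)
    model = SmallCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for x, y in loader:                 # one epoch of adversarial training
        x_adv = fgsm(model, x, y, eps=0.1)
        opt.zero_grad()
        # train on a mix of clean and adversarial examples
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        opt.step()
```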
Alternatives and similar repositories for Attack_Fashion_MNIST
Users interested in Attack_Fashion_MNIST are comparing it to the repositories listed below
- This is for releasing the source code of the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ☆57 · Updated 8 months ago
- Paper list of Adversarial Examples ☆51 · Updated last year
- TIFS2022: Decision-based Adversarial Attack with Frequency Mixup ☆22 · Updated last year
- AdvAttacks; adversarial examples; FGSM; JSMA; CW; single pixel attack; local search attack; DeepFool ☆57 · Updated 5 years ago
- Using relativism to improve GAN-based Adversarial Attacks. 🦾 ☆44 · Updated 2 years ago
- Code for Natural Language Adversarial Attacks and Defenses in Word Level ☆8 · Updated 4 years ago
- Resources on adversarial examples and poisoning attacks ☆117 · Updated 6 years ago
- Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability ☆22 · Updated 2 years ago
- Invisible Backdoor Attack with Sample-Specific Triggers ☆96 · Updated 2 years ago
- Convert TensorFlow models to PyTorch models via [MMdnn](https://github.com/microsoft/MMdnn) for adversarial attacks. ☆89 · Updated 2 years ago
- A list of papers in NeurIPS 2022 related to adversarial attack and defense / AI security. ☆72 · Updated 2 years ago
- Official PyTorch implementation for "Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization" (CVPR 20… ☆26 · Updated last year
- Adversarial examples ☆267 · Updated 2 years ago
- Simple PyTorch implementations of BadNets on MNIST and CIFAR10 (see the trigger-poisoning sketch after this list). ☆177 · Updated 2 years ago
- [ICCV 2023] "TRM-UAP: Enhancing the Transferability of Data-Free Universal Adversarial Perturbation via Truncated Ratio Maximization", Yi… ☆11 · Updated 11 months ago
- ☆79 · Updated 5 years ago
- ☆63 · Updated 4 years ago
- TransferAttack is a PyTorch framework to boost the adversarial transferability for image classification. ☆375 · Updated 3 weeks ago
- A list of recent adversarial attack and defense papers (including those on large language models) ☆41 · Updated this week
- Final Project for AM 207, Fall 2021. Review & experimentation with the paper "Adversarial Examples Are Not Bugs, They Are Features" ☆10 · Updated 3 years ago
- Reproduction of the CW attack in PyTorch with a corresponding MNIST model ☆22 · Updated 4 years ago
- ☆22 · Updated 2 years ago
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems ☆28 · Updated 4 years ago
- ☆57 · Updated 2 years ago
- Revisiting Transferable Adversarial Images (arXiv) ☆124 · Updated 4 months ago
- ☆14 · Updated last year
- [ACM MM 2023] Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer. ☆20 · Updated last year
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆17 · Updated 6 years ago
- The code of our AAAI 2021 paper "Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-transform Domain" ☆16 · Updated 4 years ago
- ☆71 · Updated 4 years ago
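Several entries above (for example the BadNets reproduction and the STRIP defence) concern backdoor rather than evasion attacks. A minimal sketch of the trigger-poisoning idea follows; the function name, patch size, and poison rate are illustrative assumptions, not code from any of the linked repositories.

```python
# Minimal BadNets-style poisoning sketch: stamp a small white patch onto a
# fraction of training images and relabel them with an attacker-chosen target
# class. All names and values here are illustrative assumptions.
import torch

def poison_batch(images, labels, target_class=0, poison_rate=0.1, patch=3):
    """Return a copy of the batch where ~poison_rate of the samples carry a
    patch x patch white trigger in the bottom-right corner and the target label."""
    images, labels = images.clone(), labels.clone()
    n_poison = max(1, int(poison_rate * images.size(0)))
    idx = torch.randperm(images.size(0))[:n_poison]
    images[idx, :, -patch:, -patch:] = 1.0  # white square trigger
    labels[idx] = target_class
    return images, labels
```

A model trained on such batches tends to classify any input carrying the trigger as the target class while behaving normally on clean inputs, which is the behaviour that defences such as STRIP in the list above try to detect.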