wang-yutao / Attack_Fashion_MNIST
Adversarial attacks and adversarial training for image-classification models (using the Fashion-MNIST dataset)
☆8 · Updated 4 years ago
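For context, the repository's topic combines two pieces: generating adversarial examples (e.g., with FGSM) and retraining the classifier on them. Below is a minimal, hypothetical PyTorch sketch of that loop on Fashion-MNIST; the `Net` architecture and the `fgsm`/`adversarial_training_step` helpers are illustrative stand-ins, not the repository's actual code.

```python
# Minimal sketch of FGSM + adversarial training on Fashion-MNIST.
# All names below are hypothetical; this is not the repository's code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets, transforms

class Net(nn.Module):
    """Small CNN classifier; the repository's architecture may differ."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64 * 7 * 7, 10)  # 28x28 input pooled twice -> 7x7

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def fgsm(model, x, y, eps=0.1):
    """One-step FGSM: move x by eps in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.1):
    """Train on adversarial examples generated on the fly."""
    x_adv = fgsm(model, x, y, eps)
    optimizer.zero_grad()  # clear gradients accumulated by the attack
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    train_set = datasets.FashionMNIST(
        "./data", train=True, download=True, transform=transforms.ToTensor()
    )
    loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
    model = Net()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for x, y in loader:
        print(adversarial_training_step(model, optimizer, x, y))
        break  # single step shown for brevity
```

Larger `eps` (the perturbation budget) makes the attack stronger but more visible; the repositories below explore many variants of this basic recipe.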
Alternatives and similar repositories for Attack_Fashion_MNIST:
Users interested in Attack_Fashion_MNIST are comparing it to the repositories listed below.
- AdvAttacks: adversarial examples; FGSM; JSMA; CW; single-pixel attack; local-search attack; DeepFool ☆55 · Updated 5 years ago
- Source code for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ☆54 · Updated 3 months ago
- Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability ☆24 · Updated 2 years ago
- Using relativism to improve GAN-based adversarial attacks 🦾 ☆41 · Updated last year
- Official implementation of (CVPR 2022 Oral) Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks ☆26 · Updated 2 years ago
- Official PyTorch implementation of "Towards Efficient Data Free Black-Box Adversarial Attack" (CVPR 2022) ☆15 · Updated 2 years ago
- A PyTorch implementation of "Towards Evaluating the Robustness of Neural Networks" ☆55 · Updated 5 years ago
- Documentation for the TensorFlow/Keras implementation of Latent Backdoor Attacks; please see the paper for details: Latent Back… ☆19 · Updated 3 years ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆54 · Updated 2 years ago
- ☆31 · Updated 2 years ago
- A curated list of papers on the transferability of adversarial examples ☆60 · Updated 7 months ago
- Implementation of the CVPR 2022 Oral paper "Better Trigger Inversion Optimization in Backdoor Scanning" ☆24 · Updated 2 years ago
- ☆70 · Updated 3 years ago
- Official PyTorch implementation of "Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization" (CVPR 20… ☆25 · Updated last year
- [ICCV 2023] "TRM-UAP: Enhancing the Transferability of Data-Free Universal Adversarial Perturbation via Truncated Ratio Maximization", Yi… ☆10 · Updated 7 months ago
- Defending against Model Stealing via Verifying Embedded External Features ☆35 · Updated 3 years ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆17 · Updated 5 years ago
- ☆79 · Updated 3 years ago
- Shared papers on adversarial-attack research ☆45 · Updated this week
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆57 · Updated last year
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated 2 years ago
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆30 · Updated 4 years ago
- The code of our AAAI 2021 paper "Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-transform Domain" ☆14 · Updated 3 years ago
- ☆11 · Updated last year
- Convert TensorFlow models to PyTorch models via [MMdnn](https://github.com/microsoft/MMdnn) for adversarial attacks ☆84 · Updated 2 years ago
- Source code for Data-free Backdoor; the paper was accepted to the 32nd USENIX Security Symposium (USENIX Security 2023) ☆31 · Updated last year
- Paper list of adversarial examples ☆45 · Updated last year
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems ☆27 · Updated 3 years ago
- Code for "Label-Consistent Backdoor Attacks" ☆52 · Updated 4 years ago
- [NeurIPS 2023] Boosting Adversarial Transferability by Achieving Flat Local Maxima ☆28 · Updated 11 months ago