inspire-group / PatchCleanser
Code for "PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier"
☆42 · Updated 2 years ago
Alternatives and similar repositories for PatchCleanser
Users interested in PatchCleanser are comparing it to the repositories listed below
- Official TensorFlow implementation for "Improving Adversarial Transferability via Neuron Attribution-based Attacks" (CVPR 2022) ☆34 · Updated 2 years ago
- A paper list for localized adversarial patch research ☆154 · Updated last year
- Adversarial Robustness, White-box, Adversarial Attack ☆50 · Updated 3 years ago
- Revisiting Transferable Adversarial Images (arXiv) ☆124 · Updated 4 months ago
- A curated list of papers on the transferability of adversarial examples ☆72 · Updated last year
- The official code of the IEEE S&P 2024 paper "Why Does Little Robustness Help? A Further Step Towards Understanding Adversarial Transferabili… ☆19 · Updated 10 months ago
- Paper sharing in adversary-related works ☆45 · Updated 2 months ago
- Enhancing the Transferability of Adversarial Attacks through Variance Tuning ☆88 · Updated last year
- ☆71 · Updated 4 years ago
- PyTorch implementation of Adversarial Patch on ImageNet (arXiv: https://arxiv.org/abs/1712.09665) ☆63 · Updated 5 years ago
- Simple yet effective targeted transferable attack (NeurIPS 2021) ☆51 · Updated 2 years ago
- REAP: A Large-Scale Realistic Adversarial Patch Benchmark ☆27 · Updated last year
- ☆51 · Updated 3 years ago
- Code for "Label-Consistent Backdoor Attacks" ☆57 · Updated 4 years ago
- ☆82 · Updated 3 years ago
- Paper list of Adversarial Examples ☆51 · Updated last year
- Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability ☆22 · Updated 2 years ago
- ☆60 · Updated 3 years ago
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆30 · Updated 4 years ago
- Source code of the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ☆57 · Updated 8 months ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆17 · Updated 6 years ago
- Convert TensorFlow models to PyTorch models via [MMdnn](https://github.com/microsoft/MMdnn) for adversarial attacks ☆89 · Updated 2 years ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated 2 years ago
- The implementation of our paper "Composite Adversarial Attacks" (AAAI 2021) ☆30 · Updated 3 years ago
- The code of our AAAI 2021 paper "Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-transform Domain" ☆16 · Updated 4 years ago
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks" ☆53 · Updated 4 years ago
- Code for "Feature Importance-aware Transferable Adversarial Attacks" ☆82 · Updated 3 years ago
- LiangSiyuan21 / Parallel-Rectangle-Flip-Attack-A-Query-based-Black-box-Attack-against-Object-Detection: An implementation of the ICCV 2021 paper "Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection" ☆28 · Updated 3 years ago
- ☆41 · Updated last year
- Official code for "Boosting the Adversarial Transferability of Surrogate Model with Dark Knowledge" ☆11 · Updated last year