Jianbo-Lab / ML-LOO
☆15 · Updated 5 years ago
Alternatives and similar repositories for ML-LOO
Users who are interested in ML-LOO are comparing it to the repositories listed below.
- ☆51 · Updated 3 years ago
- The code of our AAAI 2021 paper "Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-transform Domain" · ☆16 · Updated 4 years ago
- ☆85 · Updated 4 years ago
- Simple yet effective targeted transferable attack (NeurIPS 2021) · ☆51 · Updated 2 years ago
- This is an implementation demo of the ICLR 2021 paper [Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks… · ☆122 · Updated 3 years ago
- ☆82 · Updated 3 years ago
- Detection of adversarial examples using influence functions and nearest neighbors · ☆36 · Updated 2 years ago
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) · ☆35 · Updated 11 months ago
- Attacking a dog-vs.-fish classifier that uses transfer learning (InceptionV3) · ☆70 · Updated 7 years ago
- ☆26 · Updated 3 years ago
- Code for "Diversity Can Be Transferred: Output Diversification for White- and Black-box Attacks" · ☆53 · Updated 4 years ago
- My entry in the ICLR 2018 Reproducibility Challenge for the paper "Synthesizing Robust Adversarial Examples" https://openreview.net/pdf?id=BJDH5M-… · ☆73 · Updated 7 years ago
- ☆58 · Updated 2 years ago
- Code for "PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier" · ☆42 · Updated 2 years ago
- A PyTorch implementation of universal adversarial perturbation (UAP) that is easier to understand and implement · ☆56 · Updated 3 years ago
- ☆60 · Updated 3 years ago
- Code for generating adversarial color-shifted images · ☆19 · Updated 5 years ago
- ☆25 · Updated 5 years ago
- Source code release for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" · ☆57 · Updated 8 months ago
- Universal Adversarial Perturbations (UAPs) for PyTorch · ☆48 · Updated 3 years ago
- Implementation of the Boundary Attack algorithm as described in Brendel, Wieland, Jonas Rauber, and Matthias Bethge. "Decision-Based Adve… · ☆98 · Updated 4 years ago
- Code for "Feature Importance-aware Transferable Adversarial Attacks" · ☆82 · Updated 3 years ago
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks · ☆30 · Updated 4 years ago
- Code for the ICLR 2020 paper "Improving Adversarial Robustness Requires Revisiting Misclassified Examples" · ☆150 · Updated 4 years ago
- ☆41 · Updated last year
- Enhancing the Transferability of Adversarial Attacks through Variance Tuning · ☆88 · Updated last year
- A simple implementation of BadNets on MNIST · ☆34 · Updated 5 years ago
- A PyTorch implementation of "Adversarial Examples in the Physical World" · ☆17 · Updated 5 years ago
- Code for our NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples" · ☆42 · Updated 2 years ago
- Code for "Adversarial Camouflage: Hiding Physical World Attacks with Natural Styles" (CVPR 2020) · ☆92 · Updated 2 years ago