Jianbo-Lab / ML-LOO
☆13 · Updated 4 years ago
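ML-LOO accompanies the paper "ML-LOO: Detecting Adversarial Examples with Feature Attribution", which flags adversarial inputs from statistics of their leave-one-out (LOO) feature attributions. The sketch below is only an illustration of that leave-one-out idea in PyTorch, not the repository's actual code: the function names (`loo_attribution`, `detection_statistic`) and the choice of the interquartile range as the dispersion statistic are assumptions made here for clarity.

```python
# Minimal sketch of leave-one-out (LOO) feature attribution for adversarial
# detection, assuming a PyTorch classifier. Illustrative only; the ML-LOO
# repository's own implementation and API may differ.
import torch

def loo_attribution(model, x, baseline=0.0):
    """Replace each input feature with `baseline` and record the drop in the
    predicted-class probability; returns an attribution map shaped like `x`.
    The input tensor is modified in place during the loop but restored."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(x.unsqueeze(0)), dim=1)
        cls = probs.argmax(dim=1).item()
        p_orig = probs[0, cls].item()

        attr = torch.zeros_like(x).view(-1)
        flat = x.view(-1)                     # shares storage with x
        for i in range(flat.numel()):
            saved = flat[i].item()
            flat[i] = baseline                # "leave out" feature i
            p = torch.softmax(model(x.unsqueeze(0)), dim=1)[0, cls].item()
            attr[i] = p_orig - p              # contribution of feature i
            flat[i] = saved                   # restore the original value
    return attr.view_as(x)

def detection_statistic(attr):
    """Adversarial inputs tend to produce more dispersed attribution maps;
    the interquartile range is one simple dispersion measure (hypothetical
    choice here, used as the detection score)."""
    q = torch.tensor([0.75, 0.25], dtype=attr.dtype)
    q75, q25 = torch.quantile(attr.flatten(), q)
    return (q75 - q25).item()

# Hypothetical usage: score = detection_statistic(loo_attribution(classifier, image))
# A threshold on this score (fit on clean data) would then separate natural
# from adversarial inputs.
```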
Related projects
Alternatives and complementary repositories for ML-LOO
- ☆48 · Updated 2 years ago
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks" ☆53 · Updated 4 years ago
- Official TensorFlow implementation of "Improving Adversarial Transferability via Neuron Attribution-based Attacks" (CVPR 2022) ☆33 · Updated last year
- Code for our NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples" ☆42 · Updated last year
- ☆55 · Updated 2 years ago
- The code of our AAAI 2021 paper "Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-transform Domain" ☆14 · Updated 3 years ago
- ☆40 · Updated last year
- Code for the ICLR 2020 paper "Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets" ☆70 · Updated 4 years ago
- Code for "PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier" ☆35 · Updated last year
- A PyTorch implementation of universal adversarial perturbation (UAP) that aims to be easy to understand and implement ☆54 · Updated 2 years ago
- Code for "Adversarial Camouflage: Hiding Physical World Attacks with Natural Styles" (CVPR 2020)☆87Updated last year
- Simple yet effective targeted transferable attack (NeurIPS 2021)☆47Updated 2 years ago
- A simple implementation of BadNets on MNIST☆32Updated 5 years ago
- code for "Feature Importance-aware Transferable Adversarial Attacks"☆77Updated 2 years ago
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems☆24Updated 3 years ago
- ☆11Updated last year
- ☆76Updated 3 years ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆31 · Updated last month
- The implementation of our paper "Composite Adversarial Attacks" (AAAI 2021) ☆30 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- Enhancing the Transferability of Adversarial Attacks through Variance Tuning ☆81 · Updated 8 months ago
- ☆79 · Updated 3 years ago
- Attacking a dog-vs-fish classifier built with transfer learning on InceptionV3 ☆68 · Updated 6 years ago
- Strongest attack against Feature Scatter and Adversarial Interpolation ☆24 · Updated 4 years ago
- (ICCV 2021) We find that most existing backdoor-attack triggers in deep learning contain severe artifacts in the frequency domain ☆41 · Updated 2 years ago
- Code for our NeurIPS 2020 paper "Practical No-box Adversarial Attacks against DNNs" ☆33 · Updated 3 years ago
- PyTorch implementation of "MagNet: a Two-Pronged Defense against Adversarial Examples" ☆14 · Updated 5 years ago
- Code for the ICLR 2020 paper "Improving Adversarial Robustness Requires Revisiting Misclassified Examples" ☆144 · Updated 4 years ago
- Universal Adversarial Perturbations (UAPs) for PyTorch ☆46 · Updated 3 years ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated last year