Jianbo-Lab / ML-LOO
☆14 · Updated 5 years ago
Alternatives and similar repositories for ML-LOO:
Users interested in ML-LOO are comparing it to the repositories listed below.
- ☆50 · Updated 3 years ago
- A PyTorch implementation of universal adversarial perturbation (UAP) that is easier to understand and implement. ☆53 · Updated 2 years ago
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks" ☆53 · Updated 4 years ago
- ☆57 · Updated 2 years ago
- Official TensorFlow implementation of "Improving Adversarial Transferability via Neuron Attribution-based Attacks" (CVPR 2022) ☆34 · Updated last year
- Code for the NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples" ☆42 · Updated 2 years ago
- Code for the AAAI 2021 paper "Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-Transform Domain" ☆14 · Updated 3 years ago
- Code for the ICLR 2020 paper "Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets" ☆71 · Updated 4 years ago
- Simple yet effective targeted transferable attack (NeurIPS 2021) ☆48 · Updated 2 years ago
- The extension of "Patch-wise Attack for Fooling Deep Neural Network" (ECCV 2020), aiming to boost the success rates of targeted attack… ☆27 · Updated 2 years ago
- Code for "Adversarial Camouflage: Hiding Physical World Attacks with Natural Styles" (CVPR 2020) ☆89 · Updated last year
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆30 · Updated 4 years ago
- ReColorAdv and other attacks from the NeurIPS 2019 paper "Functional Adversarial Attacks" ☆37 · Updated 2 years ago
- Source of the ECCV 2022 paper "LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity" ☆19 · Updated last year
- ☆40 · Updated last year
- Strongest attack against Feature Scatter and Adversarial Interpolation ☆25 · Updated 5 years ago
- Code for "Black-Box Adversarial Attack with Transferable Model-based Embedding" ☆57 · Updated 4 years ago
- Enhancing the Transferability of Adversarial Attacks through Variance Tuning ☆85 · Updated 11 months ago
- A PyTorch implementation of "Towards Evaluating the Robustness of Neural Networks" ☆55 · Updated 5 years ago
- Code for "Feature Importance-Aware Transferable Adversarial Attacks" ☆80 · Updated 2 years ago
- [Machine Learning 2023] Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness ☆17 · Updated 7 months ago
- Code for "PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier" ☆37 · Updated last year
- My entry for the ICLR 2018 Reproducibility Challenge for the paper "Synthesizing Robust Adversarial Examples" https://openreview.net/pdf?id=BJDH5M-… ☆72 · Updated 6 years ago
- ☆70 · Updated 3 years ago
- ☆83 · Updated 4 years ago
- ☆79 · Updated 3 years ago
- Code for the CVPR 2020 paper "Towards Transferable Targeted Attack" ☆15 · Updated 2 years ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆54 · Updated 2 years ago
- Code for generating adversarial color-shifted images ☆19 · Updated 5 years ago
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems ☆27 · Updated 3 years ago