pralab / IndicatorsOfAttackFailure
Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples
☆19 · Updated 3 years ago
Alternatives and similar repositories for IndicatorsOfAttackFailure
Users interested in IndicatorsOfAttackFailure are comparing it to the repositories listed below.
- Craft poisoned data using MetaPoison ☆54 · Updated 4 years ago
- Code for AAAI 2021 "Towards Feature Space Adversarial Attack" ☆30 · Updated 4 years ago
- Implementation of the CVPR 2022 Oral paper "Better Trigger Inversion Optimization in Backdoor Scanning" ☆24 · Updated 3 years ago
- Code for generating adversarial color-shifted images ☆19 · Updated 6 years ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ☆87 · Updated 4 years ago
- ☆22 · Updated 4 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated 2 years ago
- Attacking a dog vs. fish classifier that uses transfer learning (InceptionV3) ☆74 · Updated 7 years ago
- ☆26 · Updated 6 years ago
- ☆88 · Updated 4 years ago
- PyTorch implementation of our ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" ☆12 · Updated 2 years ago
- ☆42 · Updated 2 years ago
- ☆19 · Updated 4 years ago
- ☆11 · Updated 2 years ago
- ConvexPolytopePosioning ☆37 · Updated 6 years ago
- ☆13 · Updated 4 years ago
- Universal Adversarial Perturbations (UAPs) for PyTorch ☆49 · Updated 4 years ago
- Implementation of our ICLR 2021 paper "Policy-Driven Attack: Learning to Query for Hard-label Black-box Adversarial Examples" ☆11 · Updated 4 years ago
- ReColorAdv and other attacks from the NeurIPS 2019 paper "Functional Adversarial Attacks" ☆38 · Updated 3 years ago
- ☆68 · Updated 5 years ago
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks" ☆53 · Updated 5 years ago
- ☆84 · Updated 4 years ago
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) ☆36 · Updated last year
- Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models" ☆56 · Updated 3 years ago
- ☆58 · Updated 3 years ago
- Code for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆34 · Updated 5 years ago
- Code for our NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples" ☆42 · Updated 2 years ago
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆47 · Updated 7 years ago
- ☆21 · Updated 3 years ago
- ☆69 · Updated last year