pralab / IndicatorsOfAttackFailure
Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples
☆18 · Updated 2 years ago
Alternatives and similar repositories for IndicatorsOfAttackFailure:
Users interested in IndicatorsOfAttackFailure are comparing it to the repositories listed below.
- Foolbox implementation for NeurIPS 2021 paper: "Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints" ☆25 · Updated 2 years ago
- ☆23 · Updated 3 years ago
- Universal Adversarial Perturbations (UAPs) for PyTorch ☆48 · Updated 3 years ago
- Code for paper "PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking" ☆64 · Updated 2 years ago
- Repository for "Certified Defenses for Adversarial Patches" (ICLR 2020) ☆32 · Updated 4 years ago
- ☆11 · Updated last year
- Craft poisoned data using MetaPoison ☆49 · Updated 3 years ago
- Implementation of the CVPR 2022 Oral paper "Better Trigger Inversion Optimization in Backdoor Scanning" ☆24 · Updated 2 years ago
- ☆64 · Updated 4 years ago
- Code for AAAI 2021 "Towards Feature Space Adversarial Attack" ☆25 · Updated 3 years ago
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆46 · Updated 6 years ago
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense ☆16 · Updated 8 months ago
- ☆21 · Updated 4 years ago
- ☆10 · Updated 3 years ago
- Code for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆32 · Updated 4 years ago
- ☆40 · Updated last year
- ☆17 · Updated 2 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year
- PyTorch implementation of our ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" ☆12 · Updated last year
- ☆25 · Updated 2 years ago
- Code repository for the paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" (ICLR 2023) ☆36 · Updated last year
- ☆50 · Updated 3 years ago
- ☆83 · Updated 3 years ago
- Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models" ☆55 · Updated 3 years ago
- ☆11 · Updated 5 years ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ☆85 · Updated 3 years ago
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆49 · Updated 2 years ago
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) ☆17 · Updated 4 years ago
- Code for identifying natural backdoors in existing image datasets ☆15 · Updated 2 years ago
- Attacking a dog-vs-fish classifier that uses transfer learning with InceptionV3 ☆71 · Updated 6 years ago