YiZeng623 / frequency-backdoor
ICCV 2021. We find that most existing backdoor-attack triggers in deep learning contain severe artifacts in the frequency domain. This repo explores how these artifacts can be used to develop stronger backdoor defenses and attacks.
☆43 · Updated 2 years ago
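The frequency-domain artifacts the description mentions are straightforward to demonstrate. Below is a minimal sketch, not code from this repo: it stamps a hypothetical BadNets-style 4×4 patch trigger onto a random grayscale image and compares the 2D DCT magnitude spectra of the clean and poisoned copies. The image size, trigger shape, and the use of `scipy.fft.dctn` are illustrative assumptions.

```python
# Illustrative sketch (not from this repo): reveal the high-frequency
# fingerprint of a simple patch trigger via the 2D DCT.
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
clean = rng.random((32, 32))      # stand-in for a 32x32 grayscale image

poisoned = clean.copy()
poisoned[-4:, -4:] = 1.0          # assumed trigger: a 4x4 white corner patch

# Orthonormal 2D DCT; low frequencies sit near the top-left corner.
spec_clean = np.abs(dctn(clean, norm="ortho"))
spec_poisoned = np.abs(dctn(poisoned, norm="ortho"))

# A small, sharp-edged patch spreads energy across the spectrum, so the
# poisoned image shows extra magnitude in the high-frequency quadrant.
diff = spec_poisoned - spec_clean
print(f"mean high-frequency change: {diff[16:, 16:].mean():+.5f}")
```

A sharp-edged patch is an easy case, but the repo's premise is that many trigger designs leave similar high-frequency fingerprints, which frequency-aware defenses can detect and which stealthier attacks must avoid.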
Alternatives and similar repositories for frequency-backdoor:
Users interested in frequency-backdoor are comparing it to the repositories listed below:
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆53 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆34 · Updated 5 months ago
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆57 · Updated last year
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated 2 years ago
- ☆24 · Updated 2 years ago
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆30 · Updated 4 years ago
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) ☆30 · Updated 3 months ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆25 · Updated 4 months ago
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated 2 years ago
- Code repository for the paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" (ICLR 2023) ☆40 · Updated 2 years ago
- Anti-Backdoor Learning (NeurIPS 2021) ☆81 · Updated last year
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) ☆15 · Updated 2 years ago
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated 7 months ago
- Code for "Label-Consistent Backdoor Attacks" ☆55 · Updated 4 years ago
- Implementation of the CVPR 2022 oral paper "Better Trigger Inversion Optimization in Backdoor Scanning" ☆24 · Updated 3 years ago
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆29 · Updated 4 years ago
- ☆23 · Updated 10 months ago
- ☆12 · Updated 3 years ago
- Source code for the ECCV 2022 poster "Data-free Backdoor Removal based on Channel Lipschitzness" ☆30 · Updated 2 years ago
- ☆21 · Updated 4 years ago
- PyTorch implementation of our ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" ☆12 · Updated 2 years ago
- ☆81 · Updated 3 years ago
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems ☆27 · Updated 4 years ago
- Code for identifying natural backdoors in existing image datasets ☆15 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- Source code for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ☆55 · Updated 5 months ago
- Defending against Model Stealing via Verifying Embedded External Features ☆36 · Updated 3 years ago
- [AAAI'21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification ☆28 · Updated 3 months ago