YiZeng623 / frequency-backdoor
ICCV 2021. We find that most existing backdoor-attack triggers in deep learning contain severe artifacts in the frequency domain. This repo explores how these artifacts can be used to develop stronger backdoor defenses and attacks.
☆44 · Updated 3 years ago
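The frequency-domain artifacts described above can be illustrated with a minimal sketch, assuming a simple white-square patch trigger. Everything here is illustrative (the synthetic image, the trigger size, and the `high_freq_energy` helper are not from the repo); the idea is just that a sharp patch leaves extra high-frequency energy in the 2-D Fourier spectrum:

```python
import numpy as np

# Hedged sketch (not the paper's code): a patch-style backdoor trigger
# introduces abrupt pixel edges, which add high-frequency energy to the
# image's 2-D Fourier spectrum.
rng = np.random.default_rng(0)
clean = rng.normal(0.5, 0.05, size=(32, 32))  # stand-in for a smooth image
patched = clean.copy()
patched[-4:, -4:] = 1.0                       # hypothetical 4x4 white-square trigger

def high_freq_energy(img, cutoff=8):
    """Sum of spectrum magnitudes outside a central low-frequency window."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spec.shape
    mask = np.ones((h, w), dtype=bool)
    mask[h // 2 - cutoff:h // 2 + cutoff, w // 2 - cutoff:w // 2 + cutoff] = False
    return spec[mask].sum()

# The patched image carries more high-frequency energy than the clean one.
print(high_freq_energy(patched) > high_freq_energy(clean))
```

A frequency-based defense builds on exactly this gap: a lightweight classifier (or a simple threshold) on the spectrum can flag trigger-bearing inputs, while a frequency-aware attack tries to smooth the trigger so this gap disappears.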
Alternatives and similar repositories for frequency-backdoor
Users interested in frequency-backdoor are comparing it to the libraries listed below.
- Code repository for the paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" (ICLR 2023) ☆41 · Updated 2 years ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆35 · Updated 7 months ago
- ☆19 · Updated 2 years ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆53 · Updated 2 years ago
- GitHub repo for One-shot Neural Backdoor Erasing via Adversarial Weight Masking (NeurIPS 2022) ☆15 · Updated 2 years ago
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆57 · Updated 2 years ago
- Code for "Label-Consistent Backdoor Attacks" ☆57 · Updated 4 years ago
- ☆27 · Updated 2 years ago
- [ICLR '21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated 2 years ago
- Anti-Backdoor Learning (NeurIPS 2021) ☆81 · Updated last year
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated 2 years ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆26 · Updated 6 months ago
- Source code for the ECCV 2022 poster "Data-free Backdoor Removal based on Channel Lipschitzness" ☆32 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆30 · Updated 4 years ago
- ☆44 · Updated last year
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems ☆28 · Updated 4 years ago
- Implementation of the CVPR 2022 Oral paper "Better Trigger Inversion Optimization in Backdoor Scanning" ☆24 · Updated 3 years ago
- ☆25 · Updated 2 years ago
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated 8 months ago
- Simple yet effective targeted transferable attack (NeurIPS 2021) ☆51 · Updated 2 years ago
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆37 · Updated last year
- Code repository for the paper [USENIX Security 2023] "Towards A Proactive ML Approach for Detecting Backdoor Poison Samples" ☆25 · Updated last year
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning ☆16 · Updated last year
- The official implementation of the USENIX Security '23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on … ☆18 · Updated 2 years ago
- Code for Transferable Unlearnable Examples ☆20 · Updated 2 years ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆17 · Updated 6 years ago
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆31 · Updated 4 years ago
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) ☆30 · Updated last month