sunbelbd / invisible_backdoor_attacks
☆19 · Updated 2 years ago
Alternatives and similar repositories for invisible_backdoor_attacks:
Users interested in invisible_backdoor_attacks are comparing it to the repositories listed below.
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks (☆30 · updated 4 years ago)
- Implementation of the CVPR 2022 oral paper "Better Trigger Inversion Optimization in Backdoor Scanning" (☆24 · updated 2 years ago)
- Code for identifying natural backdoors in existing image datasets (☆15 · updated 2 years ago)
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) (☆15 · updated 2 years ago)
- Implementation of the paper "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning" (☆20 · updated 4 years ago)
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks (☆17 · updated 5 years ago)
- ICCV 2021. We find most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep… (☆43 · updated 2 years ago)
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) (☆33 · updated 2 years ago)
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" (☆54 · updated 2 years ago)
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) (☆29 · updated 4 years ago)
- Source code for the ECCV 2022 poster "Data-free Backdoor Removal based on Channel Lipschitzness" (☆30 · updated 2 years ago)
- RAB: Provable Robustness Against Backdoor Attacks (☆39 · updated last year)
- Universal Adversarial Perturbations (UAPs) for PyTorch (☆48 · updated 3 years ago)
- Code for "Label-Consistent Backdoor Attacks" (☆55 · updated 4 years ago)
- [IEEE S&P 2024] Exploring the Orthogonality and Linearity of Backdoor Attacks (☆21 · updated 2 months ago)
- PyTorch implementation of our ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" (☆12 · updated 2 years ago)
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image (☆34 · updated 5 months ago)
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" (☆32 · updated 3 years ago)