reds-lab / Narcissus
The official implementation of the CCS'23 paper on the Narcissus clean-label backdoor attack, which needs only three poisoned images to backdoor a face recognition dataset and achieves a 99.89% attack success rate.
☆104, updated last year
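For context on the "clean-label" setting: the poisoned images keep their original, correct labels, and the attack instead optimizes a small universal trigger on a surrogate model and adds it to a few target-class images. The sketch below is an illustrative reconstruction of that general idea, not the repository's code; the `surrogate` model, `target_loader`, image shape, and the `eps`/`steps`/`lr` values are assumed placeholders.

```python
# Minimal sketch (assumptions, not the repository's code): optimize a
# norm-bounded universal trigger on a surrogate model so that target-class
# images plus the trigger are pushed toward the target class. The poisoned
# training images keep their original labels, which is what makes the
# resulting poisoning "clean-label".
import torch
import torch.nn.functional as F

def craft_trigger(surrogate, target_loader, target_class,
                  eps=16 / 255, steps=100, lr=0.01):
    """Optimize a universal trigger on a surrogate model (illustrative only)."""
    device = next(surrogate.parameters()).device
    # Trigger shape is illustrative; match it to the dataset's image size.
    trigger = torch.zeros(1, 3, 32, 32, device=device, requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    surrogate.eval()
    for _ in range(steps):
        for x, _ in target_loader:  # only target-class images are assumed here
            x = x.to(device)
            logits = surrogate(torch.clamp(x + trigger, 0.0, 1.0))
            labels = torch.full((x.size(0),), target_class, device=device)
            loss = F.cross_entropy(logits, labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():  # keep the perturbation small / imperceptible
                trigger.clamp_(-eps, eps)
    return trigger.detach()

# The returned trigger would then be added to a handful of target-class
# training images, with their labels left unchanged.
```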
Related projects
Alternatives and complementary repositories for Narcissus
- Official repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" (☆117, updated last year)
- Code for "Label-Consistent Backdoor Attacks"☆49Updated 4 years ago
- ICCV 2021, We find most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep…☆41Updated 2 years ago
- Universal Adversarial Perturbations (UAPs) for PyTorch☆46Updated 3 years ago
- Official Implementation of ICLR 2022 paper, ``Adversarial Unlearning of Backdoors via Implicit Hypergradient''☆50Updated 2 years ago
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems☆24Updated 3 years ago
- Invisible Backdoor Attack with Sample-Specific Triggers (☆91, updated 2 years ago)
- The official implementation of ContraNet (NDSS 2022) (☆18, updated last year)
- Revisiting Transferable Adversarial Images (arXiv) (☆113, updated last month)
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks (☆27, updated 3 years ago)
- A paper list for localized adversarial patch research (☆141, updated 10 months ago)
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) (☆113, updated last week)
- Simple yet effective targeted transferable attack (NeurIPS 2021) (☆47, updated 2 years ago)
- Anti-Backdoor Learning (NeurIPS 2021) (☆78, updated last year)
- Repository for the USENIX Security 2023 paper "Hard-label Black-box Universal Adversarial Patch Attack" (☆14, updated last year)
- Code repository for the paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" (ICLR 2023) (☆34, updated last year)
- Implementations of data poisoning attacks against neural networks and related defenses (☆67, updated 4 months ago)
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) (☆28, updated 4 months ago)
- Defending against Model Stealing via Verifying Embedded External Features (☆32, updated 2 years ago)
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) (☆33, updated last year)
- A PyTorch implementation of several backdoor attack algorithms, including BadNets, SIG, FIBA, FTrojan, and others (☆13, updated 6 months ago)
- The official implementation of the paper "Untargeted Backdoor Attack against Object Detection" (☆22, updated last year)
- Official implementation of "Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks" (CVPR 2022 Oral) (☆26, updated 2 years ago)
- [KDD 2022] "Bilateral Dependency Optimization: Defending Against Model-inversion Attacks" (☆22, updated 10 months ago)