xpf / Data-Efficient-Backdoor-Attacks
Data-Efficient Backdoor Attacks
☆18 · Updated 3 years ago
Alternatives and similar repositories for Data-Efficient-Backdoor-Attacks
Users interested in Data-Efficient-Backdoor-Attacks are comparing it to the repositories listed below.
- ☆11 · Updated 3 years ago
- Official repository for the CVPR 2022 paper "Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution" ☆27 · Updated 3 years ago
- Implementation of ECCV 2020 "Sparse Adversarial Attack via Perturbation Factorization" ☆27 · Updated 4 years ago
- ☆19 · Updated 3 years ago
- A Unified Approach to Interpreting and Boosting Adversarial Transferability (ICLR 2021) ☆29 · Updated 3 years ago
- Code for our ICLR 2023 paper "Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples" ☆18 · Updated 2 years ago
- Official code for "Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better" ☆41 · Updated 3 years ago
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) ☆15 · Updated 2 years ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated 2 years ago
- Code for Prior-Guided Adversarial Initialization for Fast Adversarial Training (ECCV 2022) ☆26 · Updated 2 years ago
- ☆21 · Updated 3 years ago
- SEAT ☆21 · Updated last year
- Triangle Attack: A Query-efficient Decision-based Adversarial Attack (ECCV 2022) ☆17 · Updated 2 years ago
- Code for Boosting Fast Adversarial Training with Learnable Adversarial Initialization (TIP 2022) ☆29 · Updated last year
- LiangSiyuan21 / Parallel-Rectangle-Flip-Attack-A-Query-based-Black-box-Attack-against-Object-Detection: Implementation of the ICCV 2021 paper "Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection" ☆28 · Updated 3 years ago
- ReColorAdv and other attacks from the NeurIPS 2019 paper "Functional Adversarial Attacks" ☆37 · Updated 3 years ago
- [NeurIPS 2021] Code release of Learning Transferable Perturbations ☆28 · Updated 7 months ago
- ☆58 · Updated 2 years ago
- PyTorch implementation of the BPDA+EOT attack to evaluate adversarial defenses with an EBM ☆25 · Updated 5 years ago
- Towards Defending against Adversarial Examples via Attack-Invariant Features ☆12 · Updated last year
- Code for identifying natural backdoors in existing image datasets ☆15 · Updated 2 years ago
- ☆7 · Updated 2 years ago
- ☆11 · Updated 3 years ago
- Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples ☆10 · Updated 9 months ago
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" ☆14 · Updated 2 years ago
- Source of the ECCV 2022 paper "LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity" ☆18 · Updated 4 months ago
- ☆10 · Updated 3 years ago
- Source code for the ECCV 2022 poster "Data-free Backdoor Removal based on Channel Lipschitzness" ☆33 · Updated 2 years ago
- PyTorch implementation of NPAttack ☆12 · Updated 5 years ago
- Code for our NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples" ☆42 · Updated 2 years ago