WenRuiUSTC / EntF
PyTorch implementation of our ICLR 2023 paper titled "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?".
☆12 Updated 2 years ago
Alternatives and similar repositories for EntF
Users that are interested in EntF are comparing it to the libraries listed below
- Simple yet effective targeted transferable attack (NeurIPS 2021) ☆51 Updated 2 years ago
- Code for our ICLR 2023 paper Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples. ☆18 Updated 2 years ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 Updated 2 years ago
- GitHub repo for One-shot Neural Backdoor Erasing via Adversarial Weight Masking (NeurIPS 2022) ☆15 Updated 2 years ago
- ☆19 Updated 3 years ago
- Code for identifying natural backdoors in existing image datasets. ☆15 Updated 3 years ago
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 Updated last year
- Universal Adversarial Perturbations (UAPs) for PyTorch ☆49 Updated 4 years ago
- ☆18 Updated 3 years ago
- ☆16 Updated 3 years ago
- ☆27 Updated 2 years ago
- Code for Transferable Unlearnable Examples ☆21 Updated 2 years ago
- Code for our NeurIPS 2020 paper Practical No-box Adversarial Attacks against DNNs. ☆34 Updated 4 years ago
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) ☆35 Updated 5 months ago
- LiangSiyuan21 / Parallel-Rectangle-Flip-Attack-A-Query-based-Black-box-Attack-against-Object-Detection: An implementation of the ICCV 2021 paper "Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection" ☆28 Updated 4 years ago
- Defending against Model Stealing via Verifying Embedded External Features ☆38 Updated 3 years ago
- [AAAI'21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification ☆29 Updated 8 months ago
- The official code of the IEEE S&P 2024 paper "Why Does Little Robustness Help? A Further Step Towards Understanding Adversarial Transferability" ☆19 Updated last year
- Source code for ECCV 2022 Poster: Data-free Backdoor Removal based on Channel Lipschitzness ☆34 Updated 2 years ago
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training ☆32 Updated 3 years ago
- Implementation of the paper "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning" ☆20 Updated 5 years ago
- ☆19 Updated 3 years ago
- ☆21 Updated 3 years ago
- ReColorAdv and other attacks from the NeurIPS 2019 paper "Functional Adversarial Attacks" ☆38 Updated 3 years ago
- Code for Prior-Guided Adversarial Initialization for Fast Adversarial Training (ECCV 2022) ☆26 Updated 2 years ago
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆48 Updated last year
- ☆45 Updated last year
- Source code for the ECCV 2022 paper "LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity" ☆18 Updated 6 months ago
- Sparse and Imperceivable Adversarial Attacks (accepted to ICCV 2019) ☆41 Updated 4 years ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆53 Updated 2 years ago