WenRuiUSTC / EntF
PyTorch implementation of our ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?"
☆12 · Updated 2 years ago
Alternatives and similar repositories for EntF
Users interested in EntF are comparing it to the repositories listed below.
- Simple yet effective targeted transferable attack (NeurIPS 2021) ☆51 · Updated 2 years ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated 2 years ago
- Code for our ICLR 2023 paper "Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples" ☆18 · Updated 2 years ago
- Code for identifying natural backdoors in existing image datasets ☆15 · Updated 3 years ago
- ☆19 · Updated 3 years ago
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated 11 months ago
- ☆18 · Updated 3 years ago
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) ☆15 · Updated 2 years ago
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training ☆32 · Updated 3 years ago
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) ☆33 · Updated 4 months ago
- ☆45 · Updated last year
- ☆16 · Updated 3 years ago
- ReColorAdv and other attacks from the NeurIPS 2019 paper "Functional Adversarial Attacks" ☆37 · Updated 3 years ago
- A Unified Approach to Interpreting and Boosting Adversarial Transferability (ICLR 2021) ☆31 · Updated 3 years ago
- Code for "Prior-Guided Adversarial Initialization for Fast Adversarial Training" (ECCV 2022) ☆26 · Updated 2 years ago
- Code for our NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples" ☆42 · Updated 2 years ago
- Official code of the IEEE S&P 2024 paper "Why Does Little Robustness Help? A Further Step Towards Understanding Adversarial Transferability" ☆19 · Updated last year
- ☆21 · Updated 3 years ago
- ☆19 · Updated 3 years ago
- Official code for "Boosting the Adversarial Transferability of Surrogate Model with Dark Knowledge" ☆11 · Updated last year
- Code for our NeurIPS 2020 paper "Practical No-box Adversarial Attacks against DNNs" ☆34 · Updated 4 years ago
- Source code for the ECCV 2022 poster "Data-free Backdoor Removal based on Channel Lipschitzness" ☆33 · Updated 2 years ago
- Defending against Model Stealing via Verifying Embedded External Features ☆38 · Updated 3 years ago
- Source of the ECCV 2022 paper "LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity" ☆18 · Updated 5 months ago
- Universal Adversarial Perturbations (UAPs) for PyTorch ☆48 · Updated 4 years ago
- Official repository for the CVPR 2021 paper "Data-Free Model Extraction" (https://arxiv.org/abs/2011.14779) ☆72 · Updated last year
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆36 · Updated 10 months ago
- ☆27 · Updated 2 years ago
- Code repository for the ICLR 2023 paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" ☆44 · Updated 2 years ago
- LiangSiyuan21 / Parallel-Rectangle-Flip-Attack-A-Query-based-Black-box-Attack-against-Object-Detection: an implementation of the ICCV 2021 paper "Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection" ☆28 · Updated 4 years ago