WenRuiUSTC / EntF
PyTorch implementation of our ICLR 2023 paper titled "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?".
☆12 Updated last year
Alternatives and similar repositories for EntF:
Users interested in EntF are comparing it to the repositories listed below
- ☆19 Updated 2 years ago
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) ☆14 Updated 2 years ago
- Simple yet effective targeted transferable attack (NeurIPS 2021) ☆48 Updated 2 years ago
- ☆17 Updated 3 years ago
- ☆18 Updated 2 years ago
- The official code of the IEEE S&P 2024 paper "Why Does Little Robustness Help? A Further Step Towards Understanding Adversarial Transferability" ☆17 Updated 5 months ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 Updated 2 years ago
- Code for our ICLR 2023 paper "Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples" ☆18 Updated last year
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆19 Updated 5 months ago
- Source of the ECCV 2022 paper "LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity" ☆19 Updated last year
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) ☆28 Updated last month
- Code for identifying natural backdoors in existing image datasets ☆15 Updated 2 years ago
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 Updated 2 years ago
- ☆11 Updated last year
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆54 Updated 2 years ago
- Implementation of the CVPR 2022 Oral paper "Better Trigger Inversion Optimization in Backdoor Scanning" ☆24 Updated 2 years ago
- ReColorAdv and other attacks from the NeurIPS 2019 paper "Functional Adversarial Attacks" ☆37 Updated 2 years ago
- ☆13 Updated 3 years ago
- ☆11 Updated 2 years ago
- ☆11 Updated 3 years ago
- ☆21 Updated 2 years ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆32 Updated 4 months ago
- ☆17 Updated last year
- Official code for "Boosting the Adversarial Transferability of Surrogate Model with Dark Knowledge" ☆11 Updated last year
- [AAAI'21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification ☆28 Updated last month
- Implementation of the paper "Open-sourced Dataset Protection via Backdoor Watermarking", accepted by the NeurIPS Workshop on … ☆19 Updated 3 years ago
- ☆21 Updated 4 years ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆25 Updated 3 months ago
- Code for our NeurIPS 2020 paper "Practical No-box Adversarial Attacks against DNNs" ☆33 Updated 4 years ago
- Code for the CVPR 2020 paper "Towards Transferable Targeted Attack" ☆15 Updated 2 years ago