HanxunH / Unlearnable-Examples
[ICLR 2021] Unlearnable Examples: Making Personal Data Unexploitable
☆159 · Updated 6 months ago
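The paper's core idea is error-minimizing noise: a bounded perturbation optimized to *reduce* the training loss, so a model trained on the perturbed data learns little useful signal from it. Below is a minimal, illustrative PyTorch sketch of one inner noise-update step of that min-min objective; the function and argument names are my own and do not reflect this repository's actual API.

```python
import torch
import torch.nn.functional as F

def error_min_noise_step(model, x, y, delta, eps=8/255, alpha=2/255, steps=10):
    """Illustrative inner step of the min-min objective: update the per-sample
    noise `delta` by gradient *descent* on the training loss (the opposite sign
    of a PGD attack), keeping it inside an L-inf ball of radius `eps`."""
    delta = delta.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Descend the loss w.r.t. the noise and project back onto the eps-ball.
        delta = (delta - alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    # Keep the perturbed images in the valid pixel range.
    return ((x + delta).clamp(0, 1) - x).detach()
```

As I understand the method, this inner noise update alternates with ordinary training updates of `model` (the outer minimization), and a class-wise variant shares a single `delta` across all samples of a class.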
Alternatives and similar repositories for Unlearnable-Examples:
Users interested in Unlearnable-Examples are comparing it to the repositories listed below.
- Code for the ICLR 2020 paper "Improving Adversarial Robustness Requires Revisiting Misclassified Examples" ☆144 · Updated 4 years ago
- ☆64 · Updated 11 months ago
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) ☆116 · Updated 2 months ago
- ☆57 · Updated 2 years ago
- ☆48 · Updated 3 years ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆54 · Updated 2 years ago
- ☆41 · Updated last year
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆96 · Updated 5 months ago
- Revisiting Transferable Adversarial Images (arXiv) ☆118 · Updated 3 months ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ☆85 · Updated 3 years ago
- This is an implementation demo of the ICLR 2021 paper "Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks" ☆120 · Updated 3 years ago
- Simple yet effective targeted transferable attack (NeurIPS 2021) ☆48 · Updated 2 years ago
- Code for the paper "Geometry-aware Instance-reweighted Adversarial Training" (ICLR 2021 oral) ☆59 · Updated 3 years ago
- ☆50 · Updated 3 years ago
- ☆79 · Updated 3 years ago
- Code for "Label-Consistent Backdoor Attacks" ☆52 · Updated 4 years ago
- A PyTorch implementation of "Towards Evaluating the Robustness of Neural Networks" ☆55 · Updated 5 years ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆32 · Updated 3 months ago
- Unofficial implementation of the DeepMind papers "Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples"… ☆95 · Updated 2 years ago
- Code for the paper "Better Diffusion Models Further Improve Adversarial Training" (ICML 2023) ☆132 · Updated last year
- Attacking a dog-vs-fish classifier that uses transfer learning (Inception V3) ☆71 · Updated 6 years ago
- Universal Adversarial Perturbations (UAPs) for PyTorch ☆48 · Updated 3 years ago
- Code for the ICLR 2020 paper "Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets" ☆70 · Updated 4 years ago
- A PyTorch implementation of "Towards Deep Learning Models Resistant to Adversarial Attacks" ☆149 · Updated 5 years ago
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) ☆34 · Updated 5 months ago
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆29 · Updated 4 years ago
- Attacks Which Do Not Kill Training Make Adversarial Learning Stronger (ICML 2020) ☆124 · Updated last year
- ICCV 2021. We find that most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep… ☆43 · Updated 2 years ago
- Official repository for the CVPR 2021 paper "Data-Free Model Extraction" (https://arxiv.org/abs/2011.14779) ☆69 · Updated 9 months ago
- [ICCV 2019] Enhancing Adversarial Example Transferability with an Intermediate Level Attack (https://arxiv.org/abs/1907.10823) ☆77 · Updated 5 years ago