lhfowl / adversarial_poisons
☆54 · Updated Sep 11, 2021
Alternatives and similar repositories for adversarial_poisons
Users interested in adversarial_poisons are comparing it to the repositories listed below.
- ☆24 · Updated Jan 27, 2022
- ☆26 · Updated Dec 14, 2021
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) · ☆20 · Updated Sep 9, 2024
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training · ☆32 · Updated Jan 9, 2022
- PyTorch implementation of the ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" · ☆12 · Updated Mar 13, 2023
- [ICLR 2021] Unlearnable Examples: Making Personal Data Unexploitable · ☆169 · Updated Jul 5, 2024
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching · ☆111 · Updated Aug 19, 2024
- ☆16 · Updated Jul 17, 2022
- ☆69 · Updated Feb 17, 2024
- PyTorch ImageNet1k Loader with Bounding Boxes · ☆13 · Updated Jan 23, 2022
- ☆33 · Updated Nov 27, 2023
- [ICLR 2022] Official repository for "Robust Unlearnable Examples: Protecting Data Against Adversarial Learning"