[ICLR 2022] Official repository for "Robust Unlearnable Examples: Protecting Data Against Adversarial Learning"
☆49 · Updated Jul 20, 2024
Alternatives and similar repositories for robust-unlearnable-examples
Users interested in robust-unlearnable-examples are comparing it to the libraries listed below.
- [ICLR 2022] Official repository for "Knowledge Removal in Sampling-based Bayesian Inference" ☆18 · Updated Mar 15, 2022
- ☆21 · Updated Jan 28, 2023
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated Sep 9, 2024
- The official implementation of the paper "Topology-aware Generalization of Decentralized SGD" ☆37 · Updated Mar 29, 2023
- [ECCV 2022] Code for the paper "ReAct: Temporal Action Detection with Relational Queries" ☆39 · Updated Oct 19, 2022
- Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression ☆14 · Updated Mar 22, 2025
- PyTorch implementation of our ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" ☆12 · Updated Mar 13, 2023
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning ☆33 · Updated Dec 2, 2023
- [ICLR 2021] Unlearnable Examples: Making Personal Data Unexploitable ☆170 · Updated Jul 5, 2024
- Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples ☆11 · Updated Oct 14, 2024
- ☆54 · Updated Sep 11, 2021
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) ☆46 · Updated Apr 15, 2025
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training ☆32 · Updated Jan 9, 2022
- One-Pixel Shortcut: On the Learning Preference of Deep Neural Networks (ICLR 2023 Spotlight) ☆14 · Updated Sep 28, 2025
- ☆10 · Updated Jul 28, 2022
- ☆52 · Updated Oct 8, 2024
- [CVPR 2023] Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples ☆22 · Updated Apr 25, 2023
- [CVPR'24 Oral] MetaCloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning ☆28 · Updated Nov 19, 2024
- A curated list of awesome Unlearnable Example papers and resources ☆13 · Updated Dec 14, 2025
- ☆20 · Updated Oct 28, 2025
- ☆28 · Updated Aug 21, 2024
- [Machine Learning 2023] Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness ☆17 · Updated Jul 5, 2024
- Code for the CVPR 2020 paper "Towards Transferable Targeted Attack" ☆15 · Updated Apr 24, 2022
- Code for our ICLR 2023 paper "Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples" ☆18 · Updated May 31, 2023
- Code for the CVPR 2024 paper "Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?" ☆23 · Updated Apr 2, 2024
- Simple yet effective targeted transferable attack (NeurIPS 2021) ☆51 · Updated Nov 17, 2022
- The official code implementation of "A Survey on Unlearnable Data" ☆25 · Updated Apr 4, 2025
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) ☆23 · Updated Mar 23, 2024
- Code for "Neural Tangent Generalization Attacks" (ICML 2021) ☆41 · Updated Jul 29, 2021
- A curated list of papers on the transferability of adversarial examples ☆76 · Updated Jul 8, 2024
- Implementation of "Understanding Robust Overfitting of Adversarial Training and Beyond" (ICML 2022) ☆12 · Updated Jul 1, 2022
- Code for our NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples" ☆42 · Updated Feb 10, 2023
- A repository for SignHunter, a query-efficient black-box attack ☆23 · Updated Jan 15, 2020
- A list of NeurIPS 2022 papers related to adversarial attacks and defenses / AI security ☆75 · Updated Dec 5, 2022
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆60 · Updated Apr 8, 2024
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆112 · Updated Aug 19, 2024
- On the Effectiveness of Adversarial Training Against Common Corruptions [UAI 2022] ☆30 · Updated May 16, 2022
- [NeurIPS 2023] Boosting Adversarial Transferability by Achieving Flat Local Maxima ☆34 · Updated Feb 23, 2024
- TensorFlow implementation of "Defense against Universal Adversarial Perturbations" ☆10 · Updated Apr 16, 2018