APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024)
☆46 · Updated Apr 15, 2025
Alternatives and similar repositories for apbench
Users interested in apbench are comparing it to the repositories listed below.
- Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression (☆14, updated Mar 22, 2025)
- LAFEAT: Piercing Through Adversarial Defenses with Latent Features (CVPR 2021 Oral) (☆27, updated Jun 23, 2021)
- PyTorch implementation of the ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" (☆12, updated Mar 13, 2023)
- Code for "Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders" (ICML 2024) (☆10, updated Sep 18, 2025)
- Official code implementation of "A Survey on Unlearnable Data" (☆26, updated Apr 4, 2025)
- [ICLR 2023 Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning (☆31, updated Dec 2, 2023)
- Code for "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) (☆20, updated Sep 9, 2024)
- [ICLR 2022] Official repository for "Robust Unlearnable Examples: Protecting Data Against Adversarial Learning" (☆50, updated Jul 20, 2024)
- ☆19, updated Jun 5, 2023
- Unlearnable Examples Give a False Sense of Security: Piercing Through Unexploitable Data with Learnable Examples (☆11, updated Oct 14, 2024)
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training (☆32, updated Jan 9, 2022)
- ☆54, updated Sep 11, 2021
- Code for "Evading Black-box Classifiers Without Breaking Eggs" (SaTML 2024) (☆21, updated Apr 15, 2024)
- One-Pixel Shortcut: On the Learning Preference of Deep Neural Networks (ICLR 2023 Spotlight) (☆14, updated Sep 28, 2025)
- [CVPR 2023] Unlearnable Clusters: Towards Label-Agnostic Unlearnable Examples (☆22, updated Apr 25, 2023)
- Code for "Transferable Unlearnable Examples" (☆22, updated Mar 11, 2023)
- [ICLR 2021] Unlearnable Examples: Making Personal Data Unexploitable (☆171, updated Jul 5, 2024)
- ☆20, updated Oct 28, 2025
- Code for the CVPR 2024 paper "Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?" (☆23, updated Apr 2, 2024)
- [SaTML 2023] 1st place in the CVPR 2021 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet (☆27, updated Dec 29, 2022)
- [MM '24] EvilEdit: Backdooring Text-to-Image Diffusion Models in One Second (☆28, updated Nov 19, 2024)
- [NeurIPS 2022] Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. Haotao Wang, Junyuan Hong,… (☆15, updated Nov 27, 2023)
- All code and data necessary to replicate experiments in the paper "BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model…" (☆13, updated Sep 16, 2024)
- A curated list of awesome Unlearnable Example papers and resources (☆13, updated Dec 14, 2025)
- Implementations of data poisoning attacks against neural networks and related defenses (☆105, updated Jul 16, 2024)
- [BMVC 2023] Semantic Adversarial Attacks via Diffusion Models (☆25, updated Nov 30, 2023)
- Code for "Friendly Noise Against Adversarial Noise: A Powerful Defense Against Data Poisoning Attacks" (NeurIPS 2022) (☆10, updated Jul 20, 2023)
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System (☆32, updated Nov 5, 2024)
- An Embarrassingly Simple Backdoor Attack on Self-Supervised Learning (☆20, updated Jan 24, 2024)
- Spectrum simulation attack (ECCV 2022 Oral) toward boosting the transferability of adversarial examples (☆116, updated Jul 21, 2022)
- ☆10, updated Jul 28, 2022
- ReColorAdv and other attacks from the NeurIPS 2019 paper "Functional Adversarial Attacks" (☆38, updated May 31, 2022)
- A curated list of papers and resources on data poisoning, backdoor attacks, and defenses against them (no longer maintained) (☆287, updated Jan 11, 2025)
- Repository for the paper "Refusing Safe Prompts for Multi-modal Large Language Models" (☆18, updated Oct 16, 2024)
- PyTorch implementation of the BPDA+EOT attack to evaluate adversarial defenses with an EBM (☆27, updated Jun 30, 2020)
- Official implementation of the ICCV 2023 paper "Enhancing Generalization of Universal Adversarial Perturbation through Gradient Aggregatio…" (☆28, updated Aug 17, 2023)
- ☆12, updated May 6, 2022
- Official repository for the NeurIPS 2022 paper "Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Att…" (☆20, updated Oct 28, 2022)
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) (☆33, updated Dec 16, 2022)