APBench: A Unified Availability Poisoning Attack and Defense Benchmark (TMLR 08/2024)
☆46 · Updated Apr 15, 2025
Alternatives and similar repositories for apbench
Users interested in apbench are comparing it to the repositories listed below:
- PyTorch implementation of our ICLR 2023 paper titled "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" ☆12 · Updated Mar 13, 2023
- Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression ☆14 · Updated Mar 22, 2025
- ☆19 · Updated Jun 5, 2023
- Code for "Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders" (ICML 2024) ☆10 · Updated Sep 18, 2025
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning ☆33 · Updated Dec 2, 2023
- [ICLR 2022] Official repository for "Robust Unlearnable Examples: Protecting Data Against Adversarial Learning" ☆49 · Updated Jul 20, 2024
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated Sep 9, 2024
- Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples ☆11 · Updated Oct 14, 2024
- Official code implementation of "A Survey on Unlearnable Data" ☆25 · Updated Apr 4, 2025
- LAFEAT: Piercing Through Adversarial Defenses with Latent Features (CVPR 2021 Oral) ☆27 · Updated Jun 23, 2021
- ☆20 · Updated Oct 28, 2025
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆21 · Updated Apr 15, 2024
- ☆54 · Updated Sep 11, 2021
- [CVPR '24] Code of the paper "Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?" ☆23 · Updated Apr 2, 2024
- ☆12 · Updated May 6, 2022
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training ☆32 · Updated Jan 9, 2022
- [MM '24] EvilEdit: Backdooring Text-to-Image Diffusion Models in One Second ☆28 · Updated Nov 19, 2024
- On the Robustness of GUI Grounding Models Against Image Attacks ☆12 · Updated Apr 8, 2025
- Code for "Transferable Unlearnable Examples" ☆22 · Updated Mar 11, 2023
- [ICLR 2021] Unlearnable Examples: Making Personal Data Unexploitable ☆170 · Updated Jul 5, 2024
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning ☆20 · Updated Jan 24, 2024
- [CVPR 2023] Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples ☆22 · Updated Apr 25, 2023
- Implementation of our IEEE S&P 2024 paper "Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples" ☆11 · Updated Jun 28, 2024
- ☆14 · Updated Feb 26, 2025
- [SaTML 2023] 1st place in the CVPR '21 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet ☆27 · Updated Dec 29, 2022
- Implementations of data poisoning attacks against neural networks and related defenses ☆104 · Updated Jul 16, 2024
- PyTorch implementation of the BPDA+EOT attack for evaluating an adversarial defense with an EBM ☆26 · Updated Jun 30, 2020
- Code for "Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks" (NeurIPS 2022) ☆10 · Updated Jul 20, 2023
- [NeurIPS 2024] "Membership Inference on Text-to-image Diffusion Models via Conditional Likelihood Discrepancy" ☆12 · Updated Sep 15, 2025
- One-Pixel Shortcut: On the Learning Preference of Deep Neural Networks (ICLR 2023 Spotlight) ☆14 · Updated Sep 28, 2025
- Code for the CVPR '23 paper "Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning" ☆10 · Updated Jun 9, 2023
- All code and data necessary to replicate experiments in the paper "BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models" ☆13 · Updated Sep 16, 2024
- ☆10 · Updated Jul 28, 2022
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System ☆32 · Updated Nov 5, 2024
- [TPAMI 2019] The implementation for "Direction Concentration Learning: Enhancing Congruency in Machine Learning" ☆23 · Updated Jan 30, 2020
- [BMVC 2023] Semantic Adversarial Attacks via Diffusion Models ☆25 · Updated Nov 30, 2023
- Official implementation of the ICCV 2023 paper "Enhancing Generalization of Universal Adversarial Perturbation through Gradient Aggregation" ☆27 · Updated Aug 17, 2023
- [NeurIPS '22] Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. Haotao Wang, Junyuan Hong, … ☆15 · Updated Nov 27, 2023
- ☆33 · Updated Nov 27, 2023