APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024)
☆46 · Apr 15, 2025 · Updated last year
Alternatives and similar repositories for apbench
Users interested in apbench are comparing it to the repositories listed below.
- Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression ☆14 · Mar 22, 2025 · Updated last year
- LAFEAT: Piercing Through Adversarial Defenses with Latent Features (CVPR 2021 Oral) ☆27 · Jun 23, 2021 · Updated 4 years ago
- PyTorch implementation of the ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" ☆12 · Mar 13, 2023 · Updated 3 years ago
- Code for "Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders" (ICML 2024) ☆10 · Sep 18, 2025 · Updated 7 months ago
- Official code implementation of A Survey on Unlearnable Data ☆26 · Apr 23, 2026 · Updated last week
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning ☆31 · Dec 2, 2023 · Updated 2 years ago
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Sep 9, 2024 · Updated last year
- [ICLR 2022] Official repository for "Robust Unlearnable Examples: Protecting Data Against Adversarial Learning" ☆49 · Jul 20, 2024 · Updated last year
- ☆20 · Jun 5, 2023 · Updated 2 years ago
- Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples ☆11 · Oct 14, 2024 · Updated last year
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training ☆32 · Jan 9, 2022 · Updated 4 years ago
- ☆54 · Sep 11, 2021 · Updated 4 years ago
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆21 · Apr 15, 2024 · Updated 2 years ago
- [Findings of EMNLP 2022] Expose Backdoors on the Way: A Feature-Based Efficient Defense against Textual Backdoor Attacks ☆13 · Feb 26, 2023 · Updated 3 years ago
- One-Pixel Shortcut: On the Learning Preference of Deep Neural Networks (ICLR 2023 Spotlight) ☆14 · Sep 28, 2025 · Updated 7 months ago
- [CVPR 2023] Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples ☆22 · Apr 25, 2023 · Updated 3 years ago
- Code for Transferable Unlearnable Examples ☆22 · Mar 11, 2023 · Updated 3 years ago
- [ICLR 2021] Unlearnable Examples: Making Personal Data Unexploitable ☆174 · Jul 5, 2024 · Updated last year
- ☆20 · Oct 28, 2025 · Updated 6 months ago
- SaTML 2023; 1st place in the CVPR'21 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet ☆27 · Dec 29, 2022 · Updated 3 years ago
- [MM '24] EvilEdit: Backdooring Text-to-Image Diffusion Models in One Second ☆27 · Nov 19, 2024 · Updated last year
- [NeurIPS '22] Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. Haotao Wang, Junyuan Hong, … ☆14 · Nov 27, 2023 · Updated 2 years ago
- Code for the paper [CVPR '24: Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?] ☆25 · Apr 2, 2024 · Updated 2 years ago
- A curated list of awesome Unlearnable Example papers and resources ☆13 · Dec 14, 2025 · Updated 4 months ago
- All code and data necessary to replicate experiments in the paper BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model… ☆13 · Sep 16, 2024 · Updated last year
- Implementations of data poisoning attacks against neural networks and related defenses ☆105 · Jul 16, 2024 · Updated last year
- [BMVC 2023] Semantic Adversarial Attacks via Diffusion Models ☆25 · Nov 30, 2023 · Updated 2 years ago
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System ☆32 · Nov 5, 2024 · Updated last year
- Code for "Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks" (NeurIPS 2022) ☆10 · Jul 20, 2023 · Updated 2 years ago
- On the Robustness of GUI Grounding Models Against Image Attacks ☆12 · Apr 8, 2025 · Updated last year
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning ☆20 · Jan 24, 2024 · Updated 2 years ago
- Spectrum simulation attack (ECCV 2022 Oral) towards boosting the transferability of adversarial examples ☆117 · Jul 21, 2022 · Updated 3 years ago
- ☆10 · Jul 28, 2022 · Updated 3 years ago
- ReColorAdv and other attacks from the NeurIPS 2019 paper "Functional Adversarial Attacks" ☆38 · May 31, 2022 · Updated 3 years ago
- [TPAMI 2019] The implementation for "Direction Concentration Learning: Enhancing Congruency in Machine Learning" ☆23 · Jan 30, 2020 · Updated 6 years ago
- A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained) ☆289 · Jan 11, 2025 · Updated last year
- PyTorch implementation of the BPDA+EOT attack to evaluate an adversarial defense with an EBM ☆27 · Jun 30, 2020 · Updated 5 years ago
- Official implementation of the ICCV 2023 paper: Enhancing Generalization of Universal Adversarial Perturbation through Gradient Aggregatio… ☆28 · Aug 17, 2023 · Updated 2 years ago
- ☆12 · May 6, 2022 · Updated 3 years ago