dayu11 / Availability-Attacks-Create-Shortcuts
☆10 · Updated Jul 28, 2022
Alternatives and similar repositories for Availability-Attacks-Create-Shortcuts
Users interested in Availability-Attacks-Create-Shortcuts are comparing it to the repositories listed below.
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated Sep 9, 2024
- Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples ☆11 · Updated Oct 14, 2024
- Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression ☆14 · Updated Mar 22, 2025
- Code for Transferable Unlearnable Examples ☆22 · Updated Mar 11, 2023
- ☆19 · Updated Jun 5, 2023
- [CVPR 2023] Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples ☆22 · Updated Apr 25, 2023
- ☆28 · Updated Jun 17, 2024
- [CVPR 2023] The official implementation of our CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust… ☆24 · Updated May 25, 2023
- [CVPR'24 Oral] MetaCloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning ☆28 · Updated Nov 19, 2024
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆27 · Updated Nov 18, 2024
- [ECCV'24] UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening ☆10 · Updated Dec 18, 2025
- Backdoor Cleansing with Unlabeled Data (CVPR 2023) ☆12 · Updated Apr 6, 2023
- Code for the ICLR 2022 paper "Trigger Hunting with a Topological Prior for Trojan Detection" ☆11 · Updated Sep 19, 2023
- ☆15 · Updated Jun 4, 2024
- ☆15 · Updated Apr 7, 2023
- [CVPR'24] LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning ☆15 · Updated Jan 15, 2025
- A curated list of awesome Unlearnable Example papers and resources ☆14 · Updated Dec 14, 2025
- PyTorch implementation of our ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" ☆12 · Updated Mar 13, 2023
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) ☆15 · Updated Jan 3, 2023
- ☆18 · Updated Oct 7, 2022
- Implementation demo of the IJCAI 2022 paper [Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation … ☆21 · Updated Nov 9, 2024
- Camouflage poisoning via machine unlearning ☆19 · Updated Jul 3, 2025
- [ICLR 2022] Official repository for "Robust Unlearnable Examples: Protecting Data Against Adversarial Learning" ☆48 · Updated Jul 20, 2024
- ☆20 · Updated Feb 17, 2020
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training ☆32 · Updated Jan 9, 2022
- A toolbox for backdoor attacks ☆23 · Updated Jan 13, 2023
- ☆54 · Updated Sep 11, 2021
- ☆25 · Updated Aug 18, 2023
- Code for the paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" ☆66 · Updated Apr 24, 2024
- Official implementation of "Black-box Dataset Ownership Verification via Backdoor Watermarking" ☆26 · Updated Jul 22, 2023
- PyTorch implementation of the BPDA+EOT attack for evaluating adversarial defenses with an EBM ☆26 · Updated Jun 30, 2020
- [ICLR 2021] Unlearnable Examples: Making Personal Data Unexploitable ☆169 · Updated Jul 5, 2024
- Code for "Label-Consistent Backdoor Attacks" ☆57 · Updated Nov 22, 2020
- Backdoor materials in the AI/ML domain ☆34 · Updated this week
- Official repository of the paper "Marking Code Without Breaking It: Code Watermarking for Detecting LLM-Generated Code" ☆12 · Updated Oct 7, 2025
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated Oct 10, 2022
- ☆33 · Updated Nov 27, 2023
- Code repository for "Towards a Proactive ML Approach for Detecting Backdoor Poison Samples" (USENIX Security 2023) ☆30 · Updated Jul 11, 2023
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning ☆33 · Updated Dec 2, 2023