psandovalsegura / autoregressive-poisoning
Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022)
☆20 · Updated Sep 9, 2024
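The paper behind this repository generates poisoning perturbations with autoregressive (AR) processes. As a rough, non-authoritative illustration only (this is not the repository's code; the function name `ar_noise`, the AR coefficients `(0.5, -0.3)`, the row-wise filtering direction, and the 8/255 budget are all assumptions made for this sketch), generating image-shaped noise with a simple causal AR filter might look like this:

```python
# Illustrative sketch: image-shaped noise from a generic causal AR filter,
# scaled into a typical L-infinity perturbation budget. Not the repo's method.
import numpy as np

def ar_noise(height=32, width=32, channels=3, coeffs=(0.5, -0.3), eps=8 / 255, seed=0):
    """Roll white noise through a causal AR filter along image rows, then scale to an L-inf budget."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((channels, height, width))
    out = np.zeros_like(noise)
    for x in range(width):
        out[..., x] = noise[..., x]
        # AR recursion: each column depends on the previous `len(coeffs)` columns.
        for k, a in enumerate(coeffs, start=1):
            if x - k >= 0:
                out[..., x] += a * out[..., x - k]
    out = out / (np.abs(out).max() + 1e-12)  # normalize to [-1, 1]
    return out * eps                         # scale into the perturbation budget

delta = ar_noise()
print(delta.shape, float(delta.min()), float(delta.max()))
```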
Alternatives and similar repositories for autoregressive-poisoning
Users interested in autoregressive-poisoning are comparing it to the repositories listed below.
- ☆10 · Updated Jul 28, 2022
- Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression ☆14 · Updated Mar 22, 2025
- ☆19 · Updated Jun 5, 2023
- Code for Transferable Unlearnable Examples ☆22 · Updated Mar 11, 2023
- [ICLR 2022] Official repository for "Robust Unlearnable Examples: Protecting Data Against Adversarial Learning" ☆48 · Updated Jul 20, 2024
- PyTorch implementation of our ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" ☆12 · Updated Mar 13, 2023
- ☆54 · Updated Sep 11, 2021
- A simple and efficient baseline for data attribution ☆11 · Updated Nov 10, 2023
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training ☆32 · Updated Jan 9, 2022
- Code for Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks (NeurIPS 2022) ☆10 · Updated Jul 20, 2023
- Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples ☆11 · Updated Oct 14, 2024
- [ICLR 2021] Unlearnable Examples: Making Personal Data Unexploitable ☆169 · Updated Jul 5, 2024
- [CVPR 2023] Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples ☆22 · Updated Apr 25, 2023
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) ☆46 · Updated Apr 15, 2025
- Official code implementation of "A Survey on Unlearnable Data" ☆25 · Updated Apr 4, 2025
- Code repository for our Pattern Recognition journal paper on IPR protection of image captioning models ☆11 · Updated Aug 29, 2023
- This work corroborates a run-time Trojan detection method exploiting STRong Intentional Perturbation of inputs; it is a multi-domain Trojan … ☆10 · Updated Mar 7, 2021
- [CVPR 2023] Official implementation of our paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust… ☆24 · Updated May 25, 2023
- Code for the ICLR 2022 paper "Trigger Hunting with a Topological Prior for Trojan Detection" ☆11 · Updated Sep 19, 2023
- Implementation of "Robust Watermarking of Neural Network with Exponential Weighting" in TensorFlow ☆13 · Updated Dec 2, 2020
- One-Pixel Shortcut: On the Learning Preference of Deep Neural Networks (ICLR 2023 Spotlight) ☆14 · Updated Sep 28, 2025
- Official implementation of the CVPR 2025 paper "Invisible Backdoor Attack against Self-supervised Learning" ☆17 · Updated Jul 5, 2025
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆111 · Updated Aug 19, 2024
- ☆25 · Updated Nov 14, 2022
- Code repository for the paper "Towards A Proactive ML Approach for Detecting Backdoor Poison Samples" [USENIX Security 2023] ☆30 · Updated Jul 11, 2023
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning ☆33 · Updated Dec 2, 2023
- A curated list of awesome Unlearnable Examples papers and resources ☆13 · Updated Dec 14, 2025
- What do we learn from inverting CLIP models? ☆58 · Updated Mar 6, 2024
- Bullseye Polytope Clean-Label Poisoning Attack ☆15 · Updated Nov 5, 2020
- Craft poisoned data using MetaPoison ☆54 · Updated Apr 5, 2021
- Code for the ICLR 2022 oral paper "Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Au… ☆30 · Updated Oct 22, 2023
- ☆20 · Updated Oct 28, 2025
- Code for the CVPR 2020 paper "Towards Transferable Targeted Attack" ☆15 · Updated Apr 24, 2022
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging ☆16 · Updated Oct 14, 2024
- ☆16 · Updated Dec 3, 2021
- ☆16 · Updated Jul 17, 2022
- This is the repository for our paper published at ICML 2024 ☆11 · Updated Jun 11, 2025
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆21 · Updated Apr 15, 2024
- competition ☆17 · Updated Aug 1, 2020