kaiwenzha / contrastive-poisoning
[ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
☆33 · Updated Dec 2, 2023
Alternatives and similar repositories for contrastive-poisoning
Users interested in contrastive-poisoning are comparing it to the repositories listed below.
- Code for Transferable Unlearnable Examples · ☆22 · Updated Mar 11, 2023
- Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples · ☆11 · Updated Oct 14, 2024
- [ICLR 2022] Official repository for "Robust Unlearnable Examples: Protecting Data Against Adversarial Learning" · ☆48 · Updated Jul 20, 2024
- PyTorch implementation of our ICLR 2023 paper titled "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" · ☆12 · Updated Mar 13, 2023
- One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks (ICLR 2023 Spotlight) · ☆14 · Updated Sep 28, 2025
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai*, Zhenyu Zh… · ☆21 · Updated Oct 1, 2022
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) · ☆33 · Updated Dec 16, 2022
- Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression · ☆14 · Updated Mar 22, 2025
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) · ☆46 · Updated Apr 15, 2025
- [ICLR 2021] Unlearnable Examples: Making Personal Data Unexploitable · ☆169 · Updated Jul 5, 2024
- Repository for the paper: Refusing Safe Prompts for Multi-modal Large Language Models · ☆18 · Updated Oct 16, 2024
- [Machine Learning 2023] Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness · ☆17 · Updated Jul 5, 2024
- ☆11 · Updated Dec 23, 2024
- [ICCV 2023] HybridAugment++: Unified Frequency Spectra Perturbations for Model Robustness · ☆17 · Updated Sep 28, 2023
- ☆54 · Updated Sep 11, 2021
- Code for our ICLR 2023 paper "Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples" · ☆18 · Updated May 31, 2023
- Code of the paper [CVPR'24: Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?] · ☆23 · Updated Apr 2, 2024
- Simple yet effective targeted transferable attack (NeurIPS 2021) · ☆51 · Updated Nov 17, 2022
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) · ☆20 · Updated Sep 9, 2024
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training · ☆32 · Updated Jan 9, 2022
- Official repository for "On Generating Transferable Targeted Perturbations" (ICCV 2021) · ☆62 · Updated Mar 25, 2023
- ☆22 · Updated Sep 13, 2021
- MATLAB Simulink Based Fault Tree Analyzer · ☆10 · Updated Mar 12, 2024
- Experimental toolbox for quantum Shapley values · ☆10 · Updated Jan 2, 2024
- [NeurIPS 2023] Characterizing OOD Error via Optimal Transport · ☆13 · Updated Nov 19, 2023
- Code for the ICLR 2025 paper "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" · ☆37 · Updated Jun 1, 2025
- ☆10 · Updated Jul 28, 2022
- ☆13 · Updated Mar 14, 2022
- Code for the Adversarial Example Games NeurIPS 2020 paper · ☆27 · Updated Nov 27, 2024
- ☆27 · Updated Nov 9, 2022
- A repository for the query-efficient black-box attack, SignHunter · ☆23 · Updated Jan 15, 2020
- Collection of scripts for preparation of datasets for semantic segmentation of UAV images · ☆14 · Updated Jun 21, 2022
- Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning [Accepted at ICML 2023] · ☆14 · Updated Mar 31, 2024
- Official code for "TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization", CVPR 2023 · ☆13 · Updated Apr 26, 2023
- Code repository for the paper [USENIX Security 2023] Towards A Proactive ML Approach for Detecting Backdoor Poison Samples · ☆30 · Updated Jul 11, 2023
- Official code repository for the paper "Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantizati… · ☆14 · Updated Sep 21, 2025
- ☆14 · Updated Oct 7, 2022
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" · ☆15 · Updated Jan 13, 2023
- Unrestricted adversarial images via interpretable color transformations (TIFS 2023 & BMVC 2020) · ☆32 · Updated Apr 25, 2023