zqypku / mm_poison
☆20 · Updated last year
Alternatives and similar repositories for mm_poison:
Users interested in mm_poison are comparing it to the repositories listed below.
- [CVPR 2023] The official implementation of our CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust… ☆21 · Updated last year
- Code repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" ☆22 · Updated 5 months ago
- Official code for "ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users" (NeurIPS 2024) ☆13 · Updated 3 months ago
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) ☆18 · Updated 10 months ago
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu ☆26 · Updated 5 months ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆25 · Updated 2 months ago
- [CVPR 2023] "Backdoor Defense via Adaptively Splitting Poisoned Dataset" ☆46 · Updated 10 months ago
- Code for the paper "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification" (IEEE S&P 2024) ☆30 · Updated 6 months ago
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆26 · Updated last month
- ☆18 · Updated 9 months ago
- [CCS'22] "SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders" ☆18 · Updated 2 years ago
- ☆29 · Updated 7 months ago
- [ICML 2023] "Protecting Language Generation Models via Invisible Watermarking" ☆13 · Updated last year
- ☆53 · Updated last year
- ☆24 · Updated 6 months ago
- "Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation" (NeurIPS 2022) ☆33 · Updated 2 years ago
- "Backdooring Multimodal Learning" ☆23 · Updated last year
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆19 · Updated 5 months ago
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) ☆14 · Updated 2 years ago
- ☆41 · Updated last year
- [NeurIPS 2023] Code for "DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial Purification" ☆27 · Updated 11 months ago
- [BMVC 2023] "Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning" ☆15 · Updated last year
- [ICLR 2023] "Distilling Cognitive Backdoor Patterns within an Image" ☆32 · Updated 3 months ago
- ☆31 · Updated 2 years ago
- [ICML 2023] "Are Diffusion Models Vulnerable to Membership Inference Attacks?" ☆32 · Updated 5 months ago
- [CVPR 2023] "Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples" ☆21 · Updated last year
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆57 · Updated last year
- PyTorch implementation of "Black-box Backdoor Defense via Zero-shot Image Purification" ☆11 · Updated last year