Huang-yihao / Personalization-based_backdoor
☆10 · Updated 10 months ago
Alternatives and similar repositories for Personalization-based_backdoor
Users interested in Personalization-based_backdoor are comparing it to the libraries listed below.
- All code and data necessary to replicate experiments in the paper BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model… ☆11 · Updated last year
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated last year
- Implementation of BadCLIP https://arxiv.org/pdf/2311.16194.pdf ☆21 · Updated last year
- ☆18 · Updated 3 years ago
- ☆21 · Updated last year
- ☆18 · Updated 2 years ago
- [AAAI 2024] Data-Free Hard-Label Robustness Stealing Attack ☆15 · Updated last year
- GitHub repo for One-shot Neural Backdoor Erasing via Adversarial Weight Masking (NeurIPS 2022) ☆15 · Updated 2 years ago
- ☆16 · Updated 3 years ago
- [CVPR 2023] The official implementation of our paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust… ☆23 · Updated 2 years ago
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆32 · Updated 2 months ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated 2 years ago
- Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment ☆28 · Updated last week
- Code repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" ☆27 · Updated last month
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu ☆26 · Updated last year
- Official implementation of the ICCV 2023 paper: Enhancing Generalization of Universal Adversarial Perturbation through Gradient Aggregatio… ☆27 · Updated 2 years ago
- ☆16 · Updated 3 years ago
- The code for ACM MM 2024 (Multimodal Unlearnable Examples: Protecting Data against Multimodal Contrastive Learning) ☆13 · Updated last year
- ☆11 · Updated 2 years ago
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆48 · Updated last year
- ☆12 · Updated 9 months ago
- [ICCV 2021] We find that most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep… ☆45 · Updated 3 years ago
- ☆13 · Updated last year
- SEAT ☆21 · Updated 2 years ago
- Divide-and-Conquer Attack: Harnessing the Power of LLM to Bypass the Censorship of Text-to-Image Generation Model ☆18 · Updated 8 months ago
- ☆21 · Updated last year
- Official implementation for "Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models" (IE… ☆22 · Updated 6 months ago
- Code for identifying natural backdoors in existing image datasets. ☆15 · Updated 3 years ago
- [MM '24] EvilEdit: Backdooring Text-to-Image Diffusion Models in One Second ☆25 · Updated 11 months ago
- ☆19 · Updated 3 years ago