Huang-yihao / Personalization-based_backdoor
☆10 · Updated 6 months ago
Alternatives and similar repositories for Personalization-based_backdoor
Users interested in Personalization-based_backdoor are comparing it to the repositories listed below.
- All code and data necessary to replicate experiments in the paper BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model… ☆11 · Updated 10 months ago
- [AAAI 2024] Data-Free Hard-Label Robustness Stealing Attack ☆14 · Updated last year
- Github repo for One-shot Neural Backdoor Erasing via Adversarial Weight Masking (NeurIPS 2022) ☆15 · Updated 2 years ago
- ☆18 · Updated last year
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆29 · Updated 4 months ago
- ☆11 · Updated 6 months ago
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated 10 months ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated 2 years ago
- Implementation of BadCLIP https://arxiv.org/pdf/2311.16194.pdf ☆21 · Updated last year
- Official implementation of the ICCV2023 paper: Enhancing Generalization of Universal Adversarial Perturbation through Gradient Aggregatio… ☆26 · Updated last year
- [CVPR 2023] The official implementation of our CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust… ☆23 · Updated 2 years ago
- ☆11 · Updated 2 years ago
- ☆16 · Updated 3 years ago
- Official Implementation for "Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models" (IE… ☆20 · Updated 3 months ago
- ☆18 · Updated 2 years ago
- [ICML 2023] Protecting Language Generation Models via Invisible Watermarking ☆14 · Updated last year
- ☆20 · Updated last year
- [BMVC 2023] Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning ☆15 · Updated last year
- Code for identifying natural backdoors in existing image datasets. ☆15 · Updated 2 years ago
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu ☆26 · Updated 10 months ago
- Code for Transferable Unlearnable Examples ☆20 · Updated 2 years ago
- ☆44 · Updated last year
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆26 · Updated 7 months ago
- ☆16 · Updated 3 years ago
- Code for paper: PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models, IEEE ICASSP 2024. Demo//124.220.228.133:11107 ☆17 · Updated 11 months ago
- Code Repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" ☆25 · Updated 2 months ago
- ☆13 · Updated last year
- ☆20 · Updated last year
- ☆60 · Updated last year
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆38 · Updated last year