zhaisf / BadT2I
[MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning"
☆31 · Updated Aug 14, 2025
Alternatives and similar repositories for BadT2I
Users interested in BadT2I are comparing it to the repositories listed below.
- ☆10 · Updated Dec 18, 2024
- [MM '24] EvilEdit: Backdooring Text-to-Image Diffusion Models in One Second ☆28 · Updated Nov 19, 2024
- [ICCV 2023] Source code for our paper "Rickrolling the Artist: Injecting Invisible Backdoors into Text-Guided Image Generation Models". ☆65 · Updated Nov 20, 2023
- [NeurIPS 2024] "Membership Inference on Text-to-image Diffusion Models via Conditional Likelihood Discrepancy" ☆12 · Updated Sep 15, 2025
- All code and data necessary to replicate experiments in the paper BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model… ☆13 · Updated Sep 16, 2024
- ☆13 · Updated May 1, 2024
- Code repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" ☆27 · Updated Sep 18, 2025
- The official implementation of the paper "Free Fine-tuning: A Plug-and-Play Watermarking Scheme for Deep Neural Networks". ☆19 · Updated Apr 19, 2024
- [ECCV'24] T2IShield: Defending Against Backdoors on Text-to-Image Diffusion Models ☆17 · Updated Dec 21, 2025
- GitHub repo for One-shot Neural Backdoor Erasing via Adversarial Weight Masking (NeurIPS 2022) ☆15 · Updated Jan 3, 2023
- Submission guide + discussion board for the AI Singapore Global Challenge for Safe and Secure LLMs (Track 1A). ☆16 · Updated Jul 4, 2024
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers ☆16 · Updated Oct 24, 2024
- [NeurIPS 2025 D&B] BackdoorDM: A Comprehensive Benchmark for Backdoor Learning in Diffusion Model ☆24 · Updated Aug 1, 2025
- A Watermark-Conditioned Diffusion Model for IP Protection (ECCV 2024) ☆34 · Updated Apr 5, 2025
- ☆12 · Updated May 6, 2022
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) ☆23 · Updated Mar 23, 2024
- ☆22 · Updated Apr 23, 2024
- The implementation of our IEEE S&P 2024 paper "Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples". ☆11 · Updated Jun 28, 2024
- ☆14 · Updated Feb 26, 2025
- Code for the CVPR '23 paper "Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning" ☆10 · Updated Jun 9, 2023
- [NeurIPS 2024] Fight Back Against Jailbreaking via Prompt Adversarial Tuning ☆10 · Updated Oct 29, 2024
- ☆12 · Updated Mar 5, 2024
- Proof-of-concept implementation for the paper "ThermalScope: A Practical Interrupt Side Channel Attack Based On Thermal Event Interrupts"… ☆13 · Updated Dec 17, 2024
- The official implementation of the ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns… ☆86 · Updated Feb 28, 2025
- [CVPR 2025] We present SleeperMark, a novel framework designed to embed resilient watermarks into T2I diffusion models ☆37 · Updated May 26, 2025
- Code for the ACM MM paper "Backdoor Attack on Crowd Counting" ☆17 · Updated Jul 10, 2022
- ☆14 · Updated Jan 4, 2025
- Implementation of the Implicit Knowledge Extraction Attack. ☆18 · Updated May 28, 2025
- ☆13 · Updated Jun 1, 2024
- [EMNLP 2022] Distillation-Resistant Watermarking (DRW) for Model Protection in NLP ☆13 · Updated Aug 17, 2023
- Source code for the ECCV 2022 poster "Data-free Backdoor Removal based on Channel Lipschitzness" ☆35 · Updated Jan 9, 2023
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" ☆15 · Updated Jan 13, 2023
- ☆14 · Updated Oct 7, 2022
- [ICLR 2024] "Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality" by Xuxi Chen*, Yu Yang*, Zhangyang Wang, Baha… ☆15 · Updated May 18, 2024
- 🛡️ [ICLR 2024] Toward effective protection against diffusion-based mimicry through score distillation, a.k.a. SDS-Attack ☆59 · Updated Apr 7, 2024
- Official repo to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023 ☆96 · Updated Sep 17, 2025
- PyTorch implementation of our ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" ☆12 · Updated Mar 13, 2023
- ☆20 · Updated Oct 28, 2025
- Code for the NeurIPS 2024 paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" ☆22 · Updated May 6, 2025