AntigoneRandy / SIREN
Official Implementation for "Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models" (IEEE S&P 2025).
☆23 · Updated 7 months ago
Alternatives and similar repositories for SIREN
Users that are interested in SIREN are comparing it to the libraries listed below
- ☆23 · Updated last year
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆32 · Updated 2 months ago
- [AAAI 2024] Data-Free Hard-Label Robustness Stealing Attack ☆15 · Updated last year
- [CVPR 2023] The official implementation of our CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust… ☆23 · Updated 2 years ago
- ☆18 · Updated 2 years ago
- ☆25 · Updated 2 years ago
- ☆20 · Updated 3 years ago
- Divide-and-Conquer Attack: Harnessing the Power of LLM to Bypass the Censorship of Text-to-Image Generation Model ☆18 · Updated 8 months ago
- ☆18 · Updated 3 years ago
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆48 · Updated last year
- Implementation of IEEE TNNLS 2023 and Elsevier PR 2023 papers on backdoor watermarking for deep classification models with unambiguity an… ☆19 · Updated 2 years ago
- [CVPR 2025] AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-Language Models ☆55 · Updated 2 months ago
- ☆44 · Updated 3 years ago
- ☆41 · Updated 6 months ago
- ☆27 · Updated 2 years ago
- ☆79 · Updated last year
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) ☆21 · Updated last year
- ☆27 · Updated 2 years ago
- A collection of resources on attacks and defenses targeting text-to-image diffusion models ☆77 · Updated 7 months ago
- ☆10 · Updated 10 months ago
- All code and data necessary to replicate experiments in the paper BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model… ☆12 · Updated last year
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu ☆26 · Updated last year
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate… ☆44 · Updated 11 months ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆17 · Updated 6 years ago
- ☆18 · Updated last year
- [AAAI'21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification ☆29 · Updated 10 months ago
- Distribution Preserving Backdoor Attack in Self-supervised Learning ☆18 · Updated last year
- [CVPR 2024] "Data Poisoning based Backdoor Attacks to Contrastive Learning": official code implementation ☆16 · Updated 8 months ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆34 · Updated 4 months ago
- Invisible Backdoor Attack with Sample-Specific Triggers ☆102 · Updated 3 years ago