OPTML-Group / AdvUnlearn
Official implementation of the NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models". This work adversarially unlearns the text encoder to enhance the robustness of unlearned diffusion models (DMs) against adversarial prompt attacks, achieving a better balance between unlearning performance and image generation quality.
☆49 · Updated last year
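The description above amounts to a bilevel (min-max) optimization: an inner adversary searches for a bounded prompt perturbation that re-evokes the erased concept, and an outer step updates the text encoder against that worst case while a utility term preserves generation quality. The sketch below is a toy illustration of that structure only; the scalar "encoder", the additive "prompt perturbation", and the anchor embedding are all assumptions for runnability, not the paper's actual objective or code.

```python
# Toy sketch of an AdvUnlearn-style min-max loop (illustrative assumptions
# throughout: scalar parameters stand in for the text encoder and embeddings;
# no diffusion model is involved).

theta0 = 0.5      # pretrained encoder weight (toy scalar)
theta = theta0
concept = 1.0     # embedding of the concept to erase (toy)
anchor = 0.0      # "safe" target embedding the defender maps to (toy)

def encode(theta, delta):
    # toy "text encoder" applied to an adversarially perturbed prompt
    return theta + delta

def attack(theta, eps=0.3, lr=0.2, steps=10):
    # inner step: bounded perturbation that tries to re-evoke the concept
    delta = 0.0
    for _ in range(steps):
        grad = 2 * (encode(theta, delta) - concept)
        delta = max(-eps, min(eps, delta - lr * grad))
    return delta

attacked_dist_before = abs(encode(theta, attack(theta)) - anchor)

lam, lr_out = 0.1, 0.05
for _ in range(300):
    delta = attack(theta)          # worst-case prompt perturbation
    e = encode(theta, delta)
    # outer step: pull the attacked encoding toward the safe anchor, with a
    # utility term keeping the encoder near its pretrained weights
    grad_t = 2 * (e - anchor) + 2 * lam * (theta - theta0)
    theta -= lr_out * grad_t

attacked_dist_after = abs(encode(theta, attack(theta)) - anchor)
```

Training against `attack(theta)` rather than the clean prompt is what distinguishes this from plain unlearning: after the loop, even the adversary's best bounded perturbation leaves the encoding far closer to the safe anchor than it was initially.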
Alternatives and similar repositories for AdvUnlearn
Users interested in AdvUnlearn are comparing it to the repositories listed below.
- The official implementation of the ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now" ☆86 · Updated 11 months ago
- [NeurIPS 2024 D&B Track] UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models by Yihua Zhang, Cho… ☆83 · Updated last year
- ☆38 · Updated last year
- [CVPR'24 Oral] Metacloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning ☆28 · Updated last year
- ☆23 · Updated 2 years ago
- [ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation" ☆141 · Updated 8 months ago
- [ICML 2024] Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts (Official PyTorch Implementation) ☆51 · Updated 3 weeks ago
- 🛡️ [ICLR'2024] Toward effective protection against diffusion-based mimicry through score distillation, a.k.a. SDS-Attack ☆59 · Updated last year
- [CVPR 2024] Official code for SimAC ☆21 · Updated last year
- A collection of resources on attacks and defenses targeting text-to-image diffusion models ☆90 · Updated last month
- ☆59 · Updated 3 years ago
- ☆65 · Updated last year
- [MM '24] EvilEdit: Backdooring Text-to-Image Diffusion Models in One Second ☆27 · Updated last year
- [CVPR'24] Code for the paper "Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?" ☆23 · Updated last year
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) ☆23 · Updated last year
- ☆28 · Updated last year
- List of T2I safety papers, updated daily; discussion is welcome via GitHub Discussions ☆67 · Updated last year
- [CVPR'25] Chain of Attack: On the Robustness of Vision-Language Models Against Transfer-Based Adversarial Attacks ☆29 · Updated 7 months ago
- [SatML 2024] Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk ☆16 · Updated 10 months ago
- PDM-based Purifier ☆22 · Updated last year
- [ECCV-2024] Transferable targeted adversarial attack, CLIP models, generative adversarial network, multi-target attacks ☆38 · Updated 9 months ago
- Official implementation of the paper "Stable Diffusion is Unstable" ☆23 · Updated last year
- ☆35 · Updated last year
- ☆47 · Updated last year
- ☆28 · Updated 2 years ago
- ☆40 · Updated 2 years ago
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu ☆26 · Updated last year
- Official repo for "An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization" ☆16 · Updated last year
- Code for the paper "Robustness of AI-Image Detectors: Fundamental Limits and Practical Attacks" ☆39 · Updated last year
- Official repository for Targeted Unlearning with Single Layer Unlearning Gradient (SLUG), ICML 2025 ☆14 · Updated 5 months ago