OPTML-Group / Diffusion-MU-Attack
The official implementation of the ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now". This work introduces a fast and effective attack method for evaluating how readily safety-driven unlearned diffusion models can still be made to generate harmful content; a hedged sketch of the core idea follows below.
☆72 · Updated last month
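For context, the paper's attack (UnlearnDiffAtk) crafts adversarial prompts against an unlearned model using only the model's own denoising loss on a target image as the attack objective, so no auxiliary classifier or second diffusion model is needed. The sketch below is a minimal illustration of that idea, not the official implementation: the model ID is a placeholder for the unlearned checkpoint, the target image is a dummy tensor, and it relaxes the paper's discrete prompt-token search to continuous ("soft") prompt embeddings for simplicity.

```python
# Minimal sketch (assumptions noted above, not the official Diffusion-MU-Attack code):
# optimize soft prompt embeddings so the model's noise-prediction loss on a
# target image drops, i.e. use the diffusion loss itself as the attack signal.
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4"  # placeholder: swap in the unlearned checkpoint
).to(device)
unet, vae, scheduler = pipe.unet, pipe.vae, pipe.scheduler
tokenizer, text_encoder = pipe.tokenizer, pipe.text_encoder
unet.requires_grad_(False)  # only the prompt embeddings are optimized

# Placeholder target: in the real attack, an image of the erased concept.
target = torch.rand(1, 3, 512, 512, device=device) * 2 - 1  # pixels in [-1, 1]
with torch.no_grad():
    latents = vae.encode(target).latent_dist.sample() * vae.config.scaling_factor
    ids = tokenizer("an innocuous prompt", padding="max_length",
                    max_length=tokenizer.model_max_length,
                    return_tensors="pt").input_ids.to(device)
    emb = text_encoder(ids)[0]  # (1, 77, 768) text conditioning

emb = emb.detach().clone().requires_grad_(True)  # adversarial soft prompt
opt = torch.optim.Adam([emb], lr=1e-2)

for step in range(100):
    t = torch.randint(0, scheduler.config.num_train_timesteps, (1,), device=device)
    noise = torch.randn_like(latents)
    noisy = scheduler.add_noise(latents, noise, t)
    pred = unet(noisy, t, encoder_hidden_states=emb).sample
    loss = F.mse_loss(pred, noise)  # denoising loss as attack objective
    opt.zero_grad(); loss.backward(); opt.step()
```

In the paper's setting the optimized variable is a small set of discrete tokens prepended to the prompt rather than a continuous embedding; the loop above only conveys why the diffusion loss alone suffices as an attack signal against an unlearned model.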
Alternatives and similar repositories for Diffusion-MU-Attack:
Users interested in Diffusion-MU-Attack are comparing it to the repositories listed below.
- Official implementation of NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Model…" · ☆39 · Updated 5 months ago
- A collection of resources on attacks and defenses targeting text-to-image diffusion models · ☆63 · Updated 2 weeks ago
- [NeurIPS 2024 D&B Track] UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models by Yihua Zhang, Cho… · ☆65 · Updated 4 months ago
- [ICML 2024] Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts (Official Pytorch Implementati… · ☆41 · Updated 4 months ago
- List of T2I safety papers, updated daily; discussion is welcome via GitHub Discussions · ☆60 · Updated 7 months ago
- [CVPR'24 Oral] MetaCloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning · ☆22 · Updated 4 months ago
- Code for NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" · ☆44 · Updated 2 months ago
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) · ☆20 · Updated last year
- [CVPR 2024] Official code for SimAC · ☆18 · Updated 2 months ago
- [ICLR'24 Spotlight] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation…" · ☆117 · Updated 5 months ago
- [CVPR'23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang, and Sijia Liu · ☆26 · Updated 7 months ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images · ☆33 · Updated last year
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" · ☆28 · Updated last month
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …" · ☆28 · Updated 5 months ago
- [ICLR 2024 Spotlight 🔥] [Best Paper Award SoCal NLP 2023 🏆] Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal… · ☆46 · Updated 10 months ago
- Code for the CVPR'24 paper "Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?" · ☆17 · Updated last year
- Official implementation of the paper "Stable Diffusion is Unstable" · ☆22 · Updated 10 months ago
- 🛡️ [ICLR'24] Toward effective protection against diffusion-based mimicry through score distillation, a.k.a. SDS-Attack · ☆44 · Updated last year