The official implementation of the ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now". This work introduces a fast and effective attack method for evaluating whether safety-driven unlearned diffusion models can still be made to generate harmful content.
☆87 · Updated Feb 28, 2025
Alternatives and similar repositories for Diffusion-MU-Attack
Users interested in Diffusion-MU-Attack are comparing it to the repositories listed below.
- [ICML 2024] Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts (Official Pytorch Implementati… ☆52 · Updated Jan 11, 2026
- ☆39 · Updated Jan 15, 2025
- Official implementation of NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Model… ☆49 · Updated Nov 4, 2024
- ☆48 · Updated Jul 14, 2024
- [ECCV 2024] "Receler: Reliable Concept Erasing of Text-to-Image Diffusion Models via Lightweight Erasers" (Official Implementation) ☆44 · Updated Mar 2, 2025
- ☆35 · Updated May 22, 2024
- [ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation… ☆143 · Updated Feb 28, 2026
- ☆13 · Updated Jan 14, 2026
- ☆23 · Updated Feb 5, 2026
- [NeurIPS 2024 D&B Track] UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models by Yihua Zhang, Cho… ☆82 · Updated Nov 11, 2024
- Unified Concept Editing in Diffusion Models ☆184 · Updated Dec 7, 2025
- [CVPR2024] MMA-Diffusion: MultiModal Attack on Diffusion Models ☆386 · Updated Jan 8, 2026
- Official Implementation of Safe Latent Diffusion for Text2Image ☆95 · Updated Apr 21, 2023
- ☆42 · Updated Jun 1, 2023
- ☆197 · Updated Apr 7, 2025
- Code for the paper "ConceptPrune: Concept Editing in Diffusion Models via Skilled Neuron Pruning" ☆22 · Updated Aug 13, 2024
- Official implementation of the paper "One-dimensional Adapter to Rule Them All: Concepts, Diffusion Models and Erasing Applications". ☆152 · Updated Dec 28, 2023
- Text file containing NSFW words aggregated from various sources. ☆10 · Updated Aug 23, 2020
- Erasing Concepts from Diffusion Models ☆657 · Updated Aug 18, 2025
- [CVPR 2024] "MACE: Mass Concept Erasure in Diffusion Models" (Official Implementation) ☆394 · Updated Jun 2, 2025
- List of T2I safety papers, updated daily; discussion is welcome via Discussions ☆68 · Updated Aug 12, 2024
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers ☆16 · Updated Oct 24, 2024
- [CVPR 2025] Six-CD: Benchmarking Concept Removals for Benign Text-to-image Diffusion Models ☆16 · Updated Jan 8, 2026
- Official repository for "On the Multi-modal Vulnerability of Diffusion Models" ☆16 · Updated Jul 15, 2024
- Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models, 2023 ☆139 · Updated Oct 22, 2025
- [BMVC2024] Erasing Concepts from Text-to-Image Diffusion Models with Few-shot Unlearning ☆14 · Updated Feb 14, 2026
- Divide-and-Conquer Attack: Harnessing the Power of LLM to Bypass the Censorship of Text-to-Image Generation Model ☆18 · Updated Feb 16, 2025
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆31 · Updated Aug 14, 2025
- Official Implementation for "Editing Massive Concepts in Text-to-Image Diffusion Models" ☆19 · Updated Mar 21, 2024
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆87 · Updated Nov 28, 2023
- ☆16 · Updated Feb 23, 2025
- ☆28 · Updated Aug 7, 2024
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents ☆134 · Updated Feb 19, 2025
- [NeurIPS-2023] Annual Conference on Neural Information Processing Systems ☆228 · Updated Dec 22, 2024
- ☆109 · Updated Feb 16, 2024
- ☆17 · Updated Feb 17, 2024
- [NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆84 · Updated Feb 28, 2026
- [CCS'24] SafeGen: Mitigating Unsafe Content Generation in Text-to-Image Models ☆138 · Updated Jul 1, 2025
- [ICML 2025] Unlearning in Diffusion Models using Sparse Autoencoders ☆54 · Updated Oct 16, 2025