ethz-spylab / robust-style-mimicry
☆37 · Updated 9 months ago
Alternatives and similar repositories for robust-style-mimicry:
Users interested in robust-style-mimicry are comparing it to the libraries listed below.
- [ICLR'2024] Toward effective protection against diffusion-based mimicry through score distillation, a.k.a. SDS-Attack ☆44 · Updated last year
- Code for the CVPR'24 paper "Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?" ☆17 · Updated last year
- [CVPR 2024] Official code for SimAC ☆18 · Updated 2 months ago
- ☆29 · Updated 2 months ago
- [CVPR'24 Oral] MetaCloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning ☆22 · Updated 4 months ago
- ☆25 · Updated 8 months ago
- ☆59 · Updated 2 years ago
- Code for the paper "IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Gene…" ☆27 · Updated 10 months ago
- [ICCV 2023 Oral] Official implementation of "Robust Evaluation of Diffusion-Based Adversarial Purification" ☆23 · Updated last year
- ☆19 · Updated last year
- Official implementation of the ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns…" ☆72 · Updated last month
- DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing (ICLR 2025) ☆18 · Updated 2 months ago
- PDM-based Purifier ☆20 · Updated 5 months ago
- ☆19 · Updated 11 months ago
- Official implementation of the NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Model…" ☆39 · Updated 5 months ago
- A collection of resources on attacks and defenses targeting text-to-image diffusion models ☆63 · Updated 2 weeks ago
- ☆13 · Updated last month
- Code for the paper "Robustness of AI-Image Detectors: Fundamental Limits and Practical Attacks" ☆33 · Updated 9 months ago
- [NeurIPS 2023] Code for DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial Purification ☆29 · Updated last year
- Certified robustness "for free" using off-the-shelf diffusion models and classifiers ☆40 · Updated last year
- The official implementation of WEvade ☆38 · Updated last year
- Official repo to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023 ☆87 · Updated 7 months ago
- Auto1111 port of NVlabs' adversarial purification method that uses the forward and reverse processes of diffusion models to remove advers… ☆13 · Updated last year
- List of T2I safety papers, updated daily; discussion welcome via GitHub Discussions ☆60 · Updated 8 months ago
- Official implementation of Safe Latent Diffusion for Text2Image ☆85 · Updated last year
- Official repo for "Detecting, Explaining, and Mitigating Memorization in Diffusion Models" (ICLR 2024) ☆70 · Updated last year
- ☆29 · Updated 10 months ago
- Code repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" ☆23 · Updated 7 months ago
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆28 · Updated last month
- [NeurIPS 2023] Differentially Private Image Classification by Learning Priors from Random Processes ☆12 · Updated last year