Watermark your artworks to prevent unauthorized diffusion-based style mimicry!
★357 · Updated May 30, 2025
Alternatives and similar repositories for mist
Users interested in mist are comparing it to the repositories listed below.
- [CVPR'24] Code for the paper "Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?" · ★23 · Updated Apr 2, 2024
- 🛡️ [ICLR 2024] Toward effective protection against diffusion-based mimicry through score distillation, a.k.a. SDS-Attack · ★61 · Updated Apr 7, 2024
- [CVPR'24 Oral] MetaCloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning · ★28 · Updated Nov 19, 2024
- Code for the paper "IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Gene…" · ★34 · Updated May 23, 2024
- [CVPR 2024] Official code for SimAC · ★21 · Updated Jan 23, 2025
- Anti-DreamBooth: Protecting users from personalized text-to-image synthesis (ICCV 2023) · ★262 · Updated Sep 30, 2025
- A watermarking tool to protect human artworks from being used in synthetic artworks · ★619 · Updated May 30, 2025
- Disrupting Diffusion: Token-Level Attention Erasure Attack against Diffusion-based Customization (ACM MM 2024) · ★18 · Updated Mar 31, 2025
- Investigating and Defending Shortcut Learning in Personalized Diffusion Models · ★13 · Updated Nov 19, 2024
- An unrestricted attack based on diffusion models that can achieve both good transferability and imperceptibility · ★256 · Updated Nov 23, 2025
- [NeurIPS 2023] Code for DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial Purification · ★39 · Updated Feb 29, 2024
- PDM-based Purifier · ★22 · Updated Nov 5, 2024
- [NeurIPS 2023] Official code repo: Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability · ★116 · Updated Oct 31, 2023
- PyTorch implementation for the pilot study on the robustness of latent diffusion models · ★13 · Updated Jun 20, 2023
- Code implementation for "CGI-DM: Digital Copyright Authentication for Diffusion Models via Contrasting Gradient Inversion" (CVPR 2024) · ★16 · Updated Mar 25, 2024
- A new adversarial purification method that uses the forward and reverse processes of diffusion models to remove adversarial perturbations… · ★334 · Updated Jan 29, 2023
- Code for the paper "A Recipe for Watermarking Diffusion Models" · ★155 · Updated Nov 13, 2024
- Official code for "Boosting the Adversarial Transferability of Surrogate Model with Dark Knowledge" · ★12 · Updated Dec 22, 2023
- Official implementation of "Prompt-Agnostic Adversarial Perturbation for Customized Diffusion Models" · ★25 · Updated May 30, 2025
- A curated list of awesome Unlearnable Examples papers and resources · ★13 · Updated Dec 14, 2025
- [AAAI 2024] Step Vulnerability Guided Mean Fluctuation Adversarial Attack against Conditional Diffusion Models · ★11 · Updated Oct 12, 2024
- Official repo to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023 · ★96 · Updated Sep 17, 2025
- The official implementation of "Intellectual Property Protection of Diffusion Models via the Watermark Diffusion Process" · ★20 · Updated Feb 18, 2025
- AdvDiffuser: Natural Adversarial Example Synthesis with Diffusion Models (ICCV 2023) · ★19 · Updated Jul 22, 2023
- [ICLR 2023] Official repository of the paper "Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning" · ★18 · Updated Feb 19, 2023
- Spectrum simulation attack (ECCV 2022 Oral) towards boosting the transferability of adversarial examples · ★115 · Updated Jul 21, 2022
- [AAAI 2024] Official code for "Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model" · ★60 · Updated Aug 17, 2024
- [ICML 2023] "NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations" by Yonggan … · ★18 · Updated Mar 10, 2024
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? · ★43 · Updated Sep 4, 2024