psyker-team / mist
Watermark your artworks to protect them from unauthorized diffusion-based style mimicry!
☆346 · Updated 3 months ago
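The idea behind mist and similar protection tools is to add an imperceptible adversarial perturbation to an artwork so that diffusion models cannot faithfully reproduce or imitate it. Below is a minimal, illustrative PGD-style sketch of that idea against a Stable Diffusion VAE encoder; it is not mist's actual implementation, and the model id, loss, and hyperparameters are assumptions.

```python
# Minimal, illustrative sketch of perturbation-based artwork protection: PGD that pushes
# an image's Stable Diffusion VAE latent away from its clean latent. NOT mist's actual
# implementation; the model id, loss, and hyperparameters below are assumptions.
import numpy as np
import torch
from diffusers import AutoencoderKL
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(device)
vae.requires_grad_(False).eval()

def to_tensor(img: Image.Image) -> torch.Tensor:
    """PIL image -> NCHW float tensor in [-1, 1], as the SD VAE expects."""
    x = torch.from_numpy(np.array(img.convert("RGB"))).float() / 255.0
    return (x.permute(2, 0, 1).unsqueeze(0) * 2.0 - 1.0).to(device)

def protect(img: Image.Image, eps: float = 8 / 255, alpha: float = 2 / 255, steps: int = 40) -> torch.Tensor:
    x = to_tensor(img)
    with torch.no_grad():
        clean_latent = vae.encode(x).latent_dist.mean
    # Random start inside the epsilon ball so the first gradient is non-zero.
    delta = torch.empty_like(x).uniform_(-2 * eps, 2 * eps).requires_grad_(True)
    for _ in range(steps):
        latent = vae.encode(x + delta).latent_dist.mean
        loss = torch.nn.functional.mse_loss(latent, clean_latent)  # maximize latent distortion
        loss.backward()
        with torch.no_grad():
            delta += 2 * alpha * delta.grad.sign()       # gradient ascent step ([-1, 1] scale)
            delta.clamp_(-2 * eps, 2 * eps)              # stay inside the epsilon ball
            delta.copy_((x + delta).clamp(-1, 1) - x)    # keep the perturbed image valid
        delta.grad.zero_()
    return ((x + delta).detach().clamp(-1, 1) + 1) / 2   # protected image in [0, 1]
```

A real tool would typically combine several losses (for example, terms targeting the denoising network as well) and a perceptual budget; this sketch only shows the basic optimization loop.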
Alternatives and similar repositories for mist
Users interested in mist are comparing it to the libraries listed below.
- A watermarking tool to protect human artworks from being used in synthetic artworks (see the watermarking sketch after this list). ☆539 · Updated 3 months ago
- Code of the paper: A Recipe for Watermarking Diffusion Models ☆150 · Updated 10 months ago
- Official implementation of the paper "The Stable Signature: Rooting Watermarks in Latent Diffusion Models" ☆463 · Updated 8 months ago
- Anti-DreamBooth: Protecting users from personalized text-to-image synthesis (ICCV 2023) ☆247 · Updated 3 weeks ago
- ☆42 · Updated last year
- 🛡️ [ICLR'2024] Toward effective protection against diffusion-based mimicry through score distillation, a.k.a. SDS-Attack ☆55 · Updated last year
- Official repo to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023 ☆92 · Updated 4 months ago
- ☆22 · Updated last year
- Code of paper "IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Gene… ☆35 · Updated last year
- This repository contains the implementation for the paper "AquaLoRA: Toward White-box Protection for Customized Stable Diffusion Models v… ☆52 · Updated last year
- ☆315 · Updated last year
- A curated list of watermarking schemes for generative AI models ☆107 · Updated 3 months ago
- [CVPR'24 Oral] MetaCloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning ☆29 · Updated 9 months ago
- ☆43 · Updated last year
- [CVPR 2024] Gaussian Shading: Provable Performance-Lossless Image Watermarking for Diffusion Models ☆115 · Updated last year
- 🛡 A curated list of adversarial attacks in PyTorch, with a focus on transferable black-box attacks. ☆62 · Updated last month
- [ICCV 2023] Source code for our paper "Rickrolling the Artist: Injecting Invisible Backdoors into Text-Guided Image Generation Models". ☆63 · Updated last year
- ☆28 · Updated last year
- Code of paper [CVPR'24: Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?] ☆22 · Updated last year
- A collection of resources on attacks and defenses targeting text-to-image diffusion models ☆73 · Updated 5 months ago
- [NeurIPS 2024 D&B Track] UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models by Yihua Zhang, Cho… ☆75 · Updated 10 months ago
- [NeurIPS'2023] Official Code Repo: Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability ☆108 · Updated last year
- [CVPR 2024] Official code for SimAC ☆21 · Updated 7 months ago
- The official implementation of the ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns… ☆80 · Updated 6 months ago
- Erasing Concepts from Diffusion Models ☆634 · Updated 3 weeks ago
- ☆33 · Updated last year
- Code for our paper "Benchmarking the Robustness of Image Watermarks" ☆89 · Updated 11 months ago
- Code implementation for "CGI-DM: Digital Copyright Authentication for Diffusion Models via Contrasting Gradient Inversion" (CVPR 2024) ☆14 · Updated last year
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆31 · Updated 3 weeks ago
- Auto1111 port of NVlabs' adversarial purification method that uses the forward and reverse processes of diffusion models to remove advers… ☆13 · Updated 2 years ago
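For the watermarking-oriented entries above (e.g. the first item in the list), the sketch below shows what embedding and extracting an imperceptible watermark can look like, using the open-source invisible-watermark package. It is a hypothetical illustration rather than the implementation of any repository listed here; the file paths and payload are placeholders.

```python
# Hypothetical sketch of invisible watermark embedding/extraction with the
# `invisible-watermark` package (pip install invisible-watermark opencv-python).
# Not the implementation of any repository listed above; paths/payload are placeholders.
import cv2
from imwatermark import WatermarkDecoder, WatermarkEncoder

bgr = cv2.imread("artwork.png")                      # original artwork (placeholder path)

encoder = WatermarkEncoder()
encoder.set_watermark("bytes", b"artist42")          # 8-byte (64-bit) payload
marked = encoder.encode(bgr, "dwtDct")               # embed via DWT+DCT
cv2.imwrite("artwork_marked.png", marked)

decoder = WatermarkDecoder("bytes", 64)              # expected payload length in bits
payload = decoder.decode(cv2.imread("artwork_marked.png"), "dwtDct")
print(payload.decode("utf-8"))                       # -> "artist42" if the mark survived
```

DWT+DCT embedding is imperceptible and cheap, but it is not robust to heavy cropping or regeneration, which is the kind of failure mode the watermark-robustness benchmarks above evaluate.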