MadryLab / photoguard
Raising the Cost of Malicious AI-Powered Image Editing
☆652 · Updated Feb 27, 2023
Alternatives and similar repositories for photoguard
Users interested in photoguard are comparing it to the repositories listed below.
- Anti-DreamBooth: Protecting users from personalized text-to-image synthesis (ICCV 2023) ☆261 · Updated Sep 30, 2025
- Code for the paper "Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?" (CVPR 2024) ☆23 · Updated Apr 2, 2024
- Code for the paper "IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Gene…" ☆34 · Updated May 23, 2024
- Watermark your artworks to protect them from unauthorized diffusion style mimicry! ☆356 · Updated May 30, 2025
- 🛡️ [ICLR 2024] Toward effective protection against diffusion-based mimicry through score distillation, a.k.a. SDS-Attack ☆59 · Updated Apr 7, 2024
- A new adversarial purification method that uses the forward and reverse processes of diffusion models to remove adversarial perturbations… ☆334 · Updated Jan 29, 2023
- [CVPR 2024] Official code for SimAC ☆21 · Updated Jan 23, 2025
- PyTorch implementation of a pilot study on the robustness of latent diffusion models ☆13 · Updated Jun 20, 2023
- ☆26 · Updated Nov 7, 2023
- ☆48 · Updated Jun 19, 2024
- [NeurIPS 2023] Annual Conference on Neural Information Processing Systems ☆227 · Updated Dec 22, 2024
- Generalized Data-free Universal Adversarial Perturbations in PyTorch ☆20 · Updated Oct 9, 2020
- A tool for plotting processes accessing the network ☆91 · Updated Jul 1, 2022
- [NeurIPS 2023] Code for DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial Purification ☆39 · Updated Feb 29, 2024
- ☆646 · Updated Aug 4, 2023
- Proof-of-work-protected TCP server ☆29 · Updated Nov 8, 2023
- Erasing Concepts from Diffusion Models ☆655 · Updated Aug 18, 2025
- Official implementation of Natural Color Fool: Towards Boosting Black-box Unrestricted Attacks (NeurIPS'22) ☆26 · Updated Feb 13, 2023
- Official implementation of Safe Latent Diffusion for Text2Image ☆94 · Updated Apr 21, 2023
- An unrestricted attack based on diffusion models that can achieve both good transferability and imperceptibility ☆256 · Updated Nov 23, 2025
- Convert any binary data to a PNG image file and vice versa ☆135 · Updated Dec 25, 2023
- [AAAI 2022] CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes ☆108 · Updated May 6, 2024
- ☆16 · Updated Jul 25, 2022
- Chrome extension that lets users preview links and images ☆73 · Updated Apr 6, 2024
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang, and Sijia Liu ☆26 · Updated Aug 27, 2024
- ☆80 · Updated Jul 23, 2024
- [AAAI 2021] Initiative Defense against Facial Manipulation ☆38 · Updated Jun 14, 2023
- [NeurIPS 2023] Official code repo: Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability ☆116 · Updated Oct 31, 2023
- Code for the paper "A Recipe for Watermarking Diffusion Models" ☆155 · Updated Nov 13, 2024
- ☆59 · Updated Nov 24, 2022
- Investigating and Defending Shortcut Learning in Personalized Diffusion Models ☆13 · Updated Nov 19, 2024
- Disrupting Diffusion: Token-Level Attention Erasure Attack against Diffusion-Based Customization (ACM MM 2024) ☆18 · Updated Mar 31, 2025
- [NeurIPS 2021] Code release for Learning Transferable Perturbations ☆29 · Updated Dec 7, 2024
- Remove silence from video files with a one-line ffmpeg command ☆182 · Updated Sep 16, 2022
- ☆65 · Updated Sep 29, 2024
- A fast, effective data attribution method for neural networks in PyTorch ☆229 · Updated Nov 18, 2024
- Patch-wise iterative attack (accepted at ECCV 2020) to improve the transferability of adversarial examples ☆94 · Updated Mar 13, 2022
- Universal and Transferable Attacks on Aligned Language Models ☆4,489 · Updated Aug 2, 2024
- ☆21 · Updated Mar 14, 2022