☆49 · Jun 19, 2024 · Updated last year
Alternatives and similar repositories for robust-style-mimicry
Users that are interested in robust-style-mimicry are comparing it to the libraries listed below.
- Code of the paper [CVPR'24] "Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?" ☆23 · Apr 2, 2024 · Updated last year
- 🛡️ [ICLR 2024] Toward effective protection against diffusion-based mimicry through score distillation, a.k.a. SDS-Attack ☆62 · Apr 7, 2024 · Updated last year
- Code of the paper "IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Gene…" ☆35 · May 23, 2024 · Updated last year
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆21 · Apr 15, 2024 · Updated last year
- Disrupting Diffusion: Token-Level Attention Erasure Attack against Diffusion-based Customization (ACM MM 2024) ☆18 · Mar 31, 2025 · Updated 11 months ago
- Investigating and Defending Shortcut Learning in Personalized Diffusion Models ☆13 · Nov 19, 2024 · Updated last year
- Official implementation of "Prompt-Agnostic Adversarial Perturbation for Customized Diffusion Models" ☆25 · May 30, 2025 · Updated 9 months ago
- [CVPR 2024] Official code for SimAC ☆21 · Jan 23, 2025 · Updated last year
- Glaze is a tool that helps artists prevent their artistic styles from being learned and mimicked by new AI-art models such as MidJourney,… ☆33 · Mar 24, 2023 · Updated 3 years ago
- Official repository for the paper "Gradient-based Jailbreak Images for Multimodal Fusion Models" (https://arxiv.org/abs/2410.03489) ☆19 · Oct 22, 2024 · Updated last year
- DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing (ICLR 2025) ☆44 · May 18, 2025 · Updated 10 months ago
- PyTorch implementation for the pilot study on the robustness of latent diffusion models. ☆12 · Jun 20, 2023 · Updated 2 years ago
- ☆30 · Jun 19, 2023 · Updated 2 years ago
- ☆36 · May 21, 2025 · Updated 10 months ago
- The official implementation of "Intellectual Property Protection of Diffusion Models via the Watermark Diffusion Process" ☆20 · Feb 18, 2025 · Updated last year
- [CVPR'25] Chain of Attack: On the Robustness of Vision-Language Models Against Transfer-Based Adversarial Attacks ☆31 · Jun 12, 2025 · Updated 9 months ago
- Anti-DreamBooth: Protecting users from personalized text-to-image synthesis (ICCV 2023) ☆268 · Updated this week
- [NeurIPS 2023] Differentially Private Image Classification by Learning Priors from Random Processes ☆12 · Jun 12, 2023 · Updated 2 years ago
- Edit Away and My Face Will not Stay: Personal Biometric Defense against Malicious Generative Editing ☆56 · Dec 17, 2024 · Updated last year
- A new adversarial purification method that uses the forward and reverse processes of diffusion models to remove adversarial perturbations… ☆336 · Jan 29, 2023 · Updated 3 years ago
- PDM-based Purifier ☆22 · Nov 5, 2024 · Updated last year
- Single Image Backdoor Inversion via Robust Smoothed Classifiers ☆17 · Jul 18, 2023 · Updated 2 years ago
- Official repo for [CVPR 2025 Oral] Black-Box Forgery Attacks on Semantic Watermarks for Diffusion Models ☆34 · Nov 19, 2025 · Updated 4 months ago
- PyTorch implementation of our ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" ☆12 · Mar 13, 2023 · Updated 3 years ago
- Watermark your artworks to protect them from unauthorized diffusion-based style mimicry! ☆359 · May 30, 2025 · Updated 9 months ago
- Provable Worst-Case Guarantees for the Detection of Out-of-Distribution Data ☆13 · Sep 20, 2022 · Updated 3 years ago
- [ECCV 2024] Immunizing Text-to-Image Models against Malicious Adaptation ☆17 · Jan 17, 2025 · Updated last year
- [ICLR 2022 official code] Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? ☆29 · Mar 15, 2022 · Updated 4 years ago
- [CVPR 2025 🔥] Test-time Counterattacks (TTC) towards adversarial robustness of CLIP ☆40 · Jun 4, 2025 · Updated 9 months ago
- On the effectiveness of adversarial training against common corruptions [UAI 2022] ☆30 · May 16, 2022 · Updated 3 years ago
- [NeurIPS 2023] Content-based Unrestricted Adversarial Attack ☆31 · Jul 21, 2025 · Updated 8 months ago
- [ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models ☆157 · Feb 19, 2026 · Updated last month
- ☆15 · Dec 7, 2021 · Updated 4 years ago
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) ☆25 · Oct 17, 2022 · Updated 3 years ago
- TraceableSpeech: Towards Proactively Traceable Text-to-Speech with Watermarking ☆21 · Apr 18, 2025 · Updated 11 months ago
- [ICLR 2023] Official repository of the paper "Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning" ☆19 · Feb 19, 2023 · Updated 3 years ago
- Code for the paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" ☆65 · Apr 24, 2024 · Updated last year
- ☆23 · Jul 29, 2025 · Updated 8 months ago
- An unofficial implementation of the paper by Kejiang Chen et al. on Gaussian Shading: Provable Performance-Lossless Image Waterma… ☆38 · Aug 6, 2024 · Updated last year