AAAAAAsuka / Impress
Code for the paper "IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI"
☆34 · May 23, 2024 · Updated last year
Alternatives and similar repositories for Impress
Users interested in Impress are comparing it to the repositories listed below.
- Code of paper [CVPR'24: Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?] ☆23 · Apr 2, 2024 · Updated last year
- Investigating and Defending Shortcut Learning in Personalized Diffusion Models ☆13 · Nov 19, 2024 · Updated last year
- 🛡️[ICLR'2024] Toward effective protection against diffusion-based mimicry through score distillation, a.k.a. SDS-Attack ☆59 · Apr 7, 2024 · Updated last year
- [CVPR 2024] Official code for SimAC ☆21 · Jan 23, 2025 · Updated last year
- [CVPR'24 Oral] MetaCloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning ☆28 · Nov 19, 2024 · Updated last year
- ☆48 · Jun 19, 2024 · Updated last year
- ☆28 · Aug 7, 2024 · Updated last year
- Disrupting Diffusion: Token-Level Attention Erasure Attack against Diffusion-based Customization (ACM MM 2024) ☆18 · Mar 31, 2025 · Updated 10 months ago
- (AAAI 2024) Step Vulnerability Guided Mean Fluctuation Adversarial Attack against Conditional Diffusion Models ☆11 · Oct 12, 2024 · Updated last year
- DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing (ICLR 2025) ☆43 · May 18, 2025 · Updated 8 months ago
- Official implementation of "Prompt-Agnostic Adversarial Perturbation for Customized Diffusion Models" ☆25 · May 30, 2025 · Updated 8 months ago
- [ECCV 2024] Immunizing Text-to-image Models against Malicious Adaptation ☆17 · Jan 17, 2025 · Updated last year
- PyTorch implementation for the pilot study on the robustness of latent diffusion models ☆13 · Jun 20, 2023 · Updated 2 years ago
- Anti-DreamBooth: Protecting users from personalized text-to-image synthesis (ICCV 2023) ☆261 · Sep 30, 2025 · Updated 4 months ago
- Code implementation for "CGI-DM: Digital Copyright Authentication for Diffusion Models via Contrasting Gradient Inversion" (CVPR 2024) ☆16 · Mar 25, 2024 · Updated last year
- (AAAI 2024) Transferable Adversarial Attacks for Object Detection using Object-Aware Significant Feature Distortion ☆16 · Dec 13, 2023 · Updated 2 years ago
- ☆15 · Dec 12, 2022 · Updated 3 years ago
- [NeurIPS 2023] Black-box Backdoor Defense via Zero-shot Image Purification ☆16 · Oct 31, 2023 · Updated 2 years ago
- [ECCV 2024] Towards Reliable Evaluation and Fast Training of Robust Semantic Segmentation Models ☆21 · Jul 17, 2024 · Updated last year
- [ICLR 2022] Official repository for "Robust Unlearnable Examples: Protecting Data Against Adversarial Learning" ☆48 · Jul 20, 2024 · Updated last year
- Raising the Cost of Malicious AI-Powered Image Editing ☆652 · Feb 27, 2023 · Updated 2 years ago
- Official repo to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023 ☆96 · Sep 17, 2025 · Updated 4 months ago
- ☆23 · Feb 5, 2026 · Updated last week
- 🥇 Amazon Nova AI Challenge Winner - ASTRA emerged victorious as the top attacking team in Amazon's global AI safety competition, defeati… ☆70 · Aug 14, 2025 · Updated 6 months ago
- Official code for the ICLR 2022 paper "Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap" ☆28 · Sep 28, 2025 · Updated 4 months ago
- PDM-based Purifier ☆22 · Nov 5, 2024 · Updated last year
- ☆59 · Nov 24, 2022 · Updated 3 years ago
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems ☆28 · Apr 1, 2021 · Updated 4 years ago
- Code for the paper "Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality" ☆125 · Nov 4, 2020 · Updated 5 years ago
- [NeurIPS 2023] Code for DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial Purification ☆39 · Feb 29, 2024 · Updated last year
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆31 · Aug 14, 2025 · Updated 6 months ago
- ☆38 · Jan 15, 2025 · Updated last year
- Collection of super-resolution-related papers, repositories, datasets, loss functions, etc., with descriptions ☆11 · Dec 12, 2023 · Updated 2 years ago
- Diffusion Probabilistic Model in Jax ☆11 · Apr 20, 2024 · Updated last year
- A curated list of trustworthy Generative AI papers. Daily updating... ☆76 · Sep 4, 2024 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆89 · Mar 30, 2025 · Updated 10 months ago
- Implementation of the BEAST adversarial attack for language models (ICML 2024) ☆90 · May 14, 2024 · Updated last year
- Official implementation of the paper "TRCE: Towards Reliable Malicious Concept Erasure in Text-to-Image Diffusion Models" ☆15 · Mar 11, 2025 · Updated 11 months ago
- Official implementation for the CVPR 2025 paper "Instant Adversarial Purification with Adversarial Consistency Distillation" ☆14 · Dec 19, 2025 · Updated last month