LukasStruppek / Rickrolling-the-Artist
[ICCV 2023] Source code for our paper "Rickrolling the Artist: Injecting Invisible Backdoors into Text-Guided Image Generation Models".
☆65 · Updated Nov 20, 2023
Alternatives and similar repositories for Rickrolling-the-Artist
Users interested in Rickrolling-the-Artist are comparing it to the repositories listed below.
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆31 · Updated Aug 14, 2025
- All code and data necessary to replicate experiments in the paper BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model… ☆13 · Updated Sep 16, 2024
- [MM '24] EvilEdit: Backdooring Text-to-Image Diffusion Models in One Second ☆28 · Updated Nov 19, 2024
- ☆10 · Updated Dec 18, 2024
- Code repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" ☆27 · Updated Sep 18, 2025
- [ECCV'24] T2IShield: Defending Against Backdoors on Text-to-Image Diffusion Models ☆17 · Updated Dec 21, 2025
- [NeurIPS 2025 D&B] BackdoorDM: A Comprehensive Benchmark for Backdoor Learning in Diffusion Model ☆24 · Updated Aug 1, 2025
- Official repo to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023 ☆96 · Updated Sep 17, 2025
- ☆59 · Updated Nov 24, 2022
- Source code for the JAIR paper "Does CLIP Know my Face?" (Demo: https://huggingface.co/spaces/AIML-TUDA/does-clip-know-my-face) ☆16 · Updated Jul 9, 2024
- The official implementation of the paper "Free Fine-tuning: A Plug-and-Play Watermarking Scheme for Deep Neural Networks" ☆19 · Updated Apr 19, 2024
- ☆13 · Updated May 1, 2024
- ☆16 · Updated Feb 23, 2025
- ☆32 · Updated Mar 4, 2022
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) ☆23 · Updated Mar 23, 2024
- DiffWA: Diffusion Models for Watermark Attack ☆10 · Updated Apr 23, 2024
- ☆10 · Updated Oct 31, 2022
- Official implementation of WEvade ☆41 · Updated Mar 12, 2024
- ☆12 · Updated Mar 5, 2024
- Code for the CVPR '23 paper "Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning" ☆10 · Updated Jun 9, 2023
- ☆13 · Updated Jan 23, 2026
- [NeurIPS 2024] Fight Back Against Jailbreaking via Prompt Adversarial Tuning ☆10 · Updated Oct 29, 2024
- [CVPR2025] SleeperMark, a framework for embedding resilient watermarks into T2I diffusion models ☆37 · Updated May 26, 2025
- ☆13 · Updated Jun 1, 2024
- Code for the ACM MM paper "Backdoor Attack on Crowd Counting" ☆17 · Updated Jul 10, 2022
- [NeurIPS 2024] Source code for the paper "Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models" ☆13 · Updated Jul 18, 2025
- ☆14 · Updated Jan 4, 2025
- Code for the paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" ☆66 · Updated Apr 24, 2024
- Source code for the ECCV 2022 poster "Data-free Backdoor Removal based on Channel Lipschitzness" ☆35 · Updated Jan 9, 2023
- Glaze is a tool to help artists prevent their artistic styles from being learned and mimicked by new AI-art models such as MidJourney,… ☆32 · Updated Mar 24, 2023
- [ICLR2025] Detecting Backdoor Samples in Contrastive Language Image Pretraining ☆19 · Updated Feb 26, 2025
- A Watermark-Conditioned Diffusion Model for IP Protection (ECCV 2024) ☆34 · Updated Apr 5, 2025
- ☆15 · Updated Dec 12, 2022
- ☆13 · Updated Oct 21, 2021
- Official code for reproducibility of the NeurIPS 2023 paper "Adversarial Examples Are Not Real Features" ☆16 · Updated Jun 27, 2024
- How well can Text-to-Image Generative Models understand Ethical Natural Language Interventions? ☆13 · Updated Aug 16, 2023
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆63 · Updated May 8, 2023
- Source code for the paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models", accepted at ICLR 2024 ☆34 · Updated May 23, 2024
- ☆21 · Updated Sep 16, 2024