JJ-Vice / BAGM
All code and data necessary to replicate the experiments in the paper "BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models".
☆13 (updated Sep 16, 2024)
Alternatives and similar repositories for BAGM
Users interested in BAGM are comparing it to the repositories listed below.
- ☆10 (updated Dec 18, 2024)
- Code repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" (☆27, updated Sep 18, 2025)
- [ICCV 2023] Source code for the paper "Rickrolling the Artist: Injecting Invisible Backdoors into Text-Guided Image Generation Models" (☆65, updated Nov 20, 2023)
- [NeurIPS 2025 D&B] BackdoorDM: A Comprehensive Benchmark for Backdoor Learning in Diffusion Model (☆24, updated Aug 1, 2025)
- [MM '23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" (☆31, updated Aug 14, 2025)
- [MM '24] EvilEdit: Backdooring Text-to-Image Diffusion Models in One Second (☆27, updated Nov 19, 2024)
- ☆14 (updated Jan 4, 2025)
- Official implementation of the paper "Free Fine-tuning: A Plug-and-Play Watermarking Scheme for Deep Neural Networks" (☆19, updated Apr 19, 2024)
- Source code for "An Unforgeable Publicly Verifiable Watermark for Large Language Models", accepted at ICLR 2024 (☆34, updated May 23, 2024)
- Official implementation of "Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection" (ICLR 2024) (☆18, updated Apr 15, 2024)
- ☆16 (updated Dec 3, 2021)
- Implementation of "Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder" (EMNLP Findings 2020) (☆15, updated Oct 8, 2020)
- Code for "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models" (IEEE ICASSP 2024). Demo: //124.220.228.133:11107 (☆20, updated Aug 10, 2024)
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) (☆23, updated Mar 23, 2024)
- WOUAF: Weight Modulation for User Attribution and Fingerprinting in Text-to-Image Diffusion Models (CVPR 2024) (☆25, updated Jun 14, 2024)
- Latent Watermark: Inject and Detect Watermarks in Latent Diffusion Space (☆23, updated Jan 9, 2025)
- [CVPR '24 Oral] Metacloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning (☆28, updated Nov 19, 2024)
- ☆33 (updated Jun 14, 2023)
- [CVPR 2025] SleeperMark: a framework for embedding resilient watermarks into text-to-image diffusion models (☆37, updated May 26, 2025)
- Official repository for "Marking Code Without Breaking It: Code Watermarking for Detecting LLM-Generated Code" (☆12, updated Oct 7, 2025)
- A Watermark-Conditioned Diffusion Model for IP Protection (ECCV 2024) (☆34, updated Apr 5, 2025)
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image (☆36, updated Oct 29, 2025)
- ☆12 (updated May 6, 2022)
- ☆32 (updated Mar 4, 2022)
- Unofficial implementation of the paper by Kejiang Chen et al., "Gaussian Shading: Provable Performance-Lossless Image Waterma…" (☆38, updated Aug 6, 2024)
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) (☆46, updated Apr 15, 2025)
- Code repo for a Pattern Recognition journal paper on IPR protection of image captioning models (☆11, updated Aug 29, 2023)
- ☆11 (updated Jan 25, 2019)
- [IEEE TIP] Official implementation of "BadCM: Invisible Backdoor Attack against Cross-Modal Learning" (☆14, updated Aug 30, 2024)
- ☆11 (updated Oct 30, 2024)
- ☆11 (updated Dec 9, 2018)
- A run-time Trojan detection method exploiting STRong Intentional Perturbation of inputs; a multi-domain Trojan … (☆10, updated Mar 7, 2021)
- Official PyTorch implementation of "Multimodal Forgery Detection Using Ensemble Learning", proposed in APSI… (☆10, updated Jan 4, 2023)
- Implementation of the IEEE S&P 2024 paper "Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples" (☆11, updated Jun 28, 2024)
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … (☆12, updated Sep 6, 2023)
- [NeurIPS '25] Backdoor Cleaning without External Guidance in MLLM Fine-tuning (☆17, updated Oct 13, 2025)
- ☆14 (updated Feb 26, 2025)
- ☆10 (updated Feb 10, 2021)
- springboot auto xss (☆11, updated May 23, 2018)