grasses/PromptCARE
Code for paper: "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024.
Alternatives and similar repositories for PromptCARE:
- [CCS'22] SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024)
- Official implementation of "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti…"
- Official code for ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users (NeurIPS 2024)
- Code for paper: PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models, IEEE ICASSP 2024. Demo//124.220.228.133:11107
- Official code for the NDSS paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Watermarkin…"
- Source code for MEA-Defender, accepted at IEEE S&P 2024
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning
- Implementation of IEEE TNNLS 2023 and Elsevier PR 2023 papers on backdoor watermarking for deep classification models with unambiguity an…
- A toolbox for backdoor attacks
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient"
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning"
- Implementation of the CVPR 2022 Oral paper "Better Trigger Inversion Optimization in Backdoor Scanning"
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
- Robust natural language watermarking using invariant features
- [AAAI'21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification
- Official implementation of "Intellectual Property Protection of Diffusion Models via the Watermark Diffusion Process"
- Distribution Preserving Backdoor Attack in Self-supervised Learning