zhaojunGUO / Awesome-LLM-Watermark
An up-to-date collection of papers on watermarking for large language models (LLMs).
☆13 · Updated last year
Alternatives and similar repositories for Awesome-LLM-Watermark
Users interested in Awesome-LLM-Watermark are also comparing the repositories listed below.
- [AAAI 2024] Data-Free Hard-Label Robustness Stealing Attack · ☆12 · Updated last year
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" · ☆28 · Updated 2 months ago
- ☆19 · Updated 2 years ago
- ☆18 · Updated last year
- ☆44 · Updated last year
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) · ☆20 · Updated last year
- Robust natural language watermarking using invariant features · ☆25 · Updated last year
- ☆27 · Updated last month
- ☆9 · Updated 3 years ago
- [CVPR 2023] Official implementation of "Detecting Backdoors During the Inference Stage Based on Corruption Robust…" · ☆23 · Updated last year
- All code and data necessary to replicate the experiments in the paper "BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model…" · ☆11 · Updated 8 months ago
- ☆35 · Updated 3 years ago
- ☆17 · Updated 3 years ago
- ☆20 · Updated last year
- [AAAI 2023] Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network · ☆29 · Updated 7 months ago
- [NeurIPS 2023] Code repository for "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" · ☆23 · Updated last week
- ☆17 · Updated last month
- ☆10 · Updated 5 months ago
- [CCS '22] SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders · ☆19 · Updated 2 years ago
- [ICLR 2022] Official implementation of "Adversarial Unlearning of Backdoors via Implicit Hypergradient" · ☆53 · Updated 2 years ago
- ☆25 · Updated last year
- [IEEE S&P 2024] Code for "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification" · ☆32 · Updated 9 months ago
- Website & documentation: https://sbaresearch.github.io/model-watermarking/ · ☆23 · Updated last year
- [IEEE ICASSP 2024] Code for "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models" (demo: //124.220.228.133:11107) · ☆17 · Updated 9 months ago
- ☆23 · Updated 11 months ago
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset · ☆49 · Updated last year
- ☆18 · Updated 2 years ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks · ☆17 · Updated 6 years ago
- Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability · ☆24 · Updated 2 years ago
- Code for "Label-Consistent Backdoor Attacks" · ☆57 · Updated 4 years ago