zhaojunGUO / Awesome-LLM-Watermark
An up-to-date collection of LLM watermarking papers.
☆13 · Updated 2 years ago
Alternatives and similar repositories for Awesome-LLM-Watermark
Users interested in Awesome-LLM-Watermark are comparing it to the repositories listed below.
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆31 · Updated 4 months ago
- Official Implementation for "Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models" (IE… ☆26 · Updated 9 months ago
- ☆28 · Updated 2 years ago
- A collection of resources on attacks and defenses targeting text-to-image diffusion models ☆87 · Updated 2 weeks ago
- ☆23 · Updated last year
- Implementation of BadCLIP https://arxiv.org/pdf/2311.16194.pdf ☆23 · Updated last year
- ☆32 · Updated last month
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate… ☆45 · Updated last year
- Source code of the paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models", accepted by ICLR 2024 ☆34 · Updated last year
- [NeurIPS 2025 D&B] BackdoorDM: A Comprehensive Benchmark for Backdoor Learning in Diffusion Model ☆23 · Updated 5 months ago
- All code and data necessary to replicate experiments in the paper BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model… ☆13 · Updated last year
- ☆46 · Updated 3 years ago
- ☆20 · Updated 3 years ago
- (AAAI 24) Step Vulnerability Guided Mean Fluctuation Adversarial Attack against Conditional Diffusion Models ☆11 · Updated last year
- This is an official repository for Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study (ICCV2023… ☆24 · Updated 2 years ago
- Code of the paper "IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Gene… ☆34 · Updated last year
- Official repo to reproduce the paper "How to Backdoor Diffusion Models?", published at CVPR 2023 ☆94 · Updated 3 months ago
- [AAAI 2024] Data-Free Hard-Label Robustness Stealing Attack ☆15 · Updated last year
- Code repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" ☆27 · Updated 3 months ago
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) ☆132 · Updated last year
- ☆79 · Updated last year
- [CVPR 2025] Anyattack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-language Models ☆61 · Updated 4 months ago
- Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment (NeurIPS 2025) ☆43 · Updated last month
- [ICCV 2023] Gradient inversion attack, federated learning, generative adversarial networks ☆50 · Updated last year
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models ☆292 · Updated last month
- [CVPR 2024] Gaussian Shading: Provable Performance-Lossless Image Watermarking for Diffusion Models ☆131 · Updated last year
- ☆32 · Updated last year
- Code for the paper "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024 ☆34 · Updated last year
- Robust natural language watermarking using invariant features ☆28 · Updated 2 years ago
- Accepted by CVPR 2025 (highlight) ☆22 · Updated 6 months ago