zhaojunGUO / Awesome-LLM-Watermark
An up-to-date collection of LLM watermarking papers
☆13 · Updated last year
Alternatives and similar repositories for Awesome-LLM-Watermark
Users who are interested in Awesome-LLM-Watermark are comparing it to the repositories listed below.
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆31 · Updated 3 months ago
- A collection of resources on attacks and defenses targeting text-to-image diffusion models ☆84 · Updated 8 months ago
- ☆23 · Updated last year
- Official Implementation for "Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models" (IE… ☆26 · Updated 8 months ago
- ☆32 · Updated 3 weeks ago
- ☆28 · Updated 2 years ago
- Implementation of BadCLIP https://arxiv.org/pdf/2311.16194.pdf ☆24 · Updated last year
- [NeurIPS 2025 D&B] BackdoorDM: A Comprehensive Benchmark for Backdoor Learning in Diffusion Model ☆23 · Updated 4 months ago
- All code and data necessary to replicate experiments in the paper BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model… ☆13 · Updated last year
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate… ☆45 · Updated last year
- ☆20 · Updated 3 years ago
- [CCS'22] SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders ☆19 · Updated 3 years ago
- Code Repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" ☆27 · Updated 2 months ago
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu ☆26 · Updated last year
- Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment (NeurIPS 2025) ☆41 · Updated last month
- Source code of paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models" accepted by ICLR 2024 ☆34 · Updated last year
- ☆20 · Updated 2 years ago
- Robust natural language watermarking using invariant features ☆28 · Updated 2 years ago
- This is an official repository for Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study (ICCV2023… ☆24 · Updated 2 years ago
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) ☆129 · Updated last year
- ☆29 · Updated last year
- ☆80 · Updated last year
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆41 · Updated last year
- Official repo to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023 ☆94 · Updated 2 months ago
- Divide-and-Conquer Attack: Harnessing the Power of LLM to Bypass the Censorship of Text-to-Image Generation Model ☆18 · Updated 9 months ago
- ☆18 · Updated 3 years ago
- ☆26 · Updated 2 years ago
- CVPR 2025 - Anyattack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-language Models ☆60 · Updated 4 months ago
- This is an official repository of ``VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models'' (NeurIPS 2… ☆63 · Updated 8 months ago
- [AAAI 2024] Data-Free Hard-Label Robustness Stealing Attack ☆15 · Updated last year