xlhex / NLG_api_watermark
☆9 · Updated 3 years ago
Alternatives and similar repositories for NLG_api_watermark
Users interested in NLG_api_watermark are comparing it to the libraries listed below.
- Robust natural language watermarking using invariant features ☆25 · Updated last year
- Up-to-date collection of papers on watermarking LLMs ☆15 · Updated last year
- Official Code for ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users (NeurIPS 2024) ☆16 · Updated 7 months ago
- ☆20 · Updated last year
- Official repository of the paper: Who Wrote this Code? Watermarking for Code Generation (ACL 2024) ☆34 · Updated last year
- Repository for Towards Codable Watermarking for Large Language Models ☆37 · Updated last year
- Code for paper: "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024 ☆32 · Updated 9 months ago
- ☆54 · Updated 2 weeks ago
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) ☆20 · Updated last year
- Repo for SemStamp (NAACL 2024) and k-SemStamp (ACL 2024) ☆20 · Updated 5 months ago
- Source code of the paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models", accepted by ICLR 2024 ☆34 · Updated last year
- ☆26 · Updated 2 years ago
- (AAAI 2024) Step Vulnerability Guided Mean Fluctuation Adversarial Attack against Conditional Diffusion Models ☆10 · Updated 7 months ago
- ☆26 · Updated 3 weeks ago
- A toolbox for backdoor attacks ☆22 · Updated 2 years ago
- ☆10 · Updated 5 months ago
- ☆19 · Updated 2 years ago
- ☆18 · Updated last year
- Code for paper: "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models", IEEE ICASSP 2024. Demo: //124.220.228.133:11107 ☆17 · Updated 9 months ago
- ☆27 · Updated 2 months ago
- ☆15 · Updated 2 months ago
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆28 · Updated 3 months ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆34 · Updated last month
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu ☆26 · Updated 9 months ago
- ☆22 · Updated 9 months ago
- ☆43 · Updated 6 months ago
- ☆18 · Updated 2 years ago
- [AAAI 2024] Data-Free Hard-Label Robustness Stealing Attack ☆13 · Updated last year
- ☆42 · Updated last year
- Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models ☆21 · Updated 2 months ago