BrianPulfer / LMWatermark
Implementation of the paper 'A Watermark for Large Language Models' by Kirchenbauer, Geiping, et al.
☆24 · Updated 2 years ago
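The watermark in that paper works by pseudo-randomly splitting the vocabulary into a "green" and "red" list at each decoding step, seeded by the previous token, and adding a small bias delta to the green tokens' logits; detection recomputes the splits, counts green tokens, and forms a z-score. The sketch below is a minimal illustration of that idea in PyTorch, assuming placeholder function names and defaults (gamma, delta) rather than this repository's actual API.

```python
import torch

def greenlist_mask(prev_token: int, vocab_size: int, gamma: float = 0.25) -> torch.Tensor:
    # Seed a generator with the previous token so the detector, which also
    # sees the text, can recompute the same green/red vocabulary split.
    gen = torch.Generator().manual_seed(prev_token)
    perm = torch.randperm(vocab_size, generator=gen)
    mask = torch.zeros(vocab_size, dtype=torch.bool)
    mask[perm[: int(gamma * vocab_size)]] = True
    return mask

def watermarked_logits(logits: torch.Tensor, prev_token: int,
                       gamma: float = 0.25, delta: float = 2.0) -> torch.Tensor:
    # "Soft" watermark: add delta to green-list logits before sampling.
    return logits + delta * greenlist_mask(prev_token, logits.shape[-1], gamma)

def detection_z_score(tokens: list[int], vocab_size: int, gamma: float = 0.25) -> float:
    # Count tokens that fall in their predecessor's green list and compare
    # against the gamma baseline expected for unwatermarked text.
    hits = sum(
        bool(greenlist_mask(prev, vocab_size, gamma)[cur])
        for prev, cur in zip(tokens[:-1], tokens[1:])
    )
    n = len(tokens) - 1
    return (hits - gamma * n) / (n * gamma * (1 - gamma)) ** 0.5
```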
Alternatives and similar repositories for LMWatermark
Users interested in LMWatermark are comparing it to the repositories listed below
- Official implementation of the paper "Three Bricks to Consolidate Watermarks for LLMs" ☆50 · Updated 2 years ago
- Code for watermarking language models ☆85 · Updated last year
- [EMNLP 2022] Distillation-Resistant Watermarking (DRW) for Model Protection in NLP ☆13 · Updated 2 years ago
- Source code of the paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models", accepted at ICLR 2024 ☆34 · Updated last year
- [ICCV 2023] Source code for our paper "Rickrolling the Artist: Injecting Invisible Backdoors into Text-Guided Image Generation Models" ☆65 · Updated 2 years ago
- ☆48 · Updated last year
- ☆43 · Updated 2 years ago
- Watermarking Text Generated by Black-Box Language Models ☆39 · Updated 2 years ago
- The official repository of the paper "On the Exploitability of Instruction Tuning" ☆70 · Updated 2 years ago
- Official code implementation of SKU, accepted to ACL 2024 Findings ☆20 · Updated last year
- Code and data for the paper "A Semantic Invariant Robust Watermark for Large Language Models", accepted at ICLR 2024 ☆36 · Updated last year
- ☆164 · Updated last year
- ☆27 · Updated 11 months ago
- ☆40 · Updated last year
- [ICLR 2025] Permute-and-Flip: An optimally robust and watermarkable decoder for LLMs ☆19 · Updated 10 months ago
- ☆25 · Updated 2 years ago
- ☆184 · Updated 2 months ago
- ☆24 · Updated last year
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks ☆32 · Updated last year
- ☆22 · Updated last year
- NeurIPS'24 - LLM Safety Landscape ☆39 · Updated 3 months ago
- Towards Machine Unlearning Benchmarks: Forgetting the Personal Identities in Facial Recognition Systems ☆66 · Updated 8 months ago
- ☆14 · Updated last year
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" ☆40 · Updated last year
- [ICML 2024] Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models ☆24 · Updated last year
- ☆27 · Updated 2 years ago
- Official PyTorch repo of CVPR'23 and NeurIPS'23 papers on understanding replication in diffusion models ☆113 · Updated 2 years ago
- [ICLR 2024] Provable Robust Watermarking for AI-Generated Text ☆38 · Updated 2 years ago
- Code for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder", published in Nature Machine Intelligence (NMI) ☆56 · Updated 2 years ago
- Implementation of Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder (EMNLP Findings 2020) ☆15 · Updated 5 years ago