hlzhang109 / impossibility-watermark
[ICML 2024] Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models
☆22 · Updated 8 months ago
Alternatives and similar repositories for impossibility-watermark
Users interested in impossibility-watermark are comparing it to the libraries listed below.
- Official Implementation of the paper "Three Bricks to Consolidate Watermarks for LLMs" ☆46 · Updated last year
- Code for watermarking language models ☆79 · Updated 8 months ago
- ☆29 · Updated 11 months ago
- ☆54 · Updated 2 years ago
- Official repository for "PostMark: A Robust Blackbox Watermark for Large Language Models" ☆26 · Updated 8 months ago
- Code and data for the paper "A Semantic Invariant Robust Watermark for Large Language Models", accepted at ICLR 2024 ☆30 · Updated 6 months ago
- [ICLR 2024] Provable Robust Watermarking for AI-Generated Text ☆32 · Updated last year
- ☆33 · Updated 4 months ago
- ☆34 · Updated last year
- ☆24 · Updated 3 months ago
- ☆42 · Updated 3 months ago
- ☆53 · Updated last year
- ☆27 · Updated 2 months ago
- Privacy backdoors ☆51 · Updated last year
- Implementation of the paper "A Watermark for Large Language Models" by Kirchenbauer, Geiping, et al. ☆23 · Updated 2 years ago
- Official Repository for Dataset Inference for LLMs ☆33 · Updated 9 months ago
- ☆39 · Updated 7 months ago
- ☆18 · Updated last year
- ☆20 · Updated 5 months ago
- Official Code for "Baseline Defenses for Adversarial Attacks Against Aligned Language Models" ☆23 · Updated last year
- Official implementation of WEvade ☆38 · Updated last year
- ☆20 · Updated last year
- [NeurIPS 2023] Differentially Private Image Classification by Learning Priors from Random Processes ☆12 · Updated last year
- Source code for the paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models", accepted at ICLR 2024 ☆33 · Updated 11 months ago
- Starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition ☆86 · Updated 11 months ago
- Code for the paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" ☆53 · Updated last year
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆61 · Updated 4 months ago
- Repository for the arXiv preprint "Gradient-based Adversarial Attacks against Text Transformers" ☆107 · Updated 2 years ago
- Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models ☆20 · Updated 2 months ago
- ☆16 · Updated last year