XuandongZhao / DRW
[EMNLP 2022] Distillation-Resistant Watermarking (DRW) for Model Protection in NLP
☆13 · Updated last year
Alternatives and similar repositories for DRW:
Users who are interested in DRW are comparing it to the libraries listed below.
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" ☆36 · Updated 10 months ago
- Code for the paper "Rethinking Stealthiness of Backdoor Attack against NLP Models" (ACL-IJCNLP 2021) ☆24 · Updated 3 years ago
- ☆21 · Updated last year
- Updated 11 months ago
- ☆20 · Updated 5 months ago
- Codes for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness" published at IC… ☆27 · Updated 5 years ago
- Code for paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" ☆52 · Updated last year
- Code and data for paper "A Semantic Invariant Robust Watermark for Large Language Models" accepted by ICLR 2024 ☆29 · Updated 5 months ago
- ☆42 · Updated 3 months ago
- [ICML 2023] Protecting Language Generation Models via Invisible Watermarking ☆13 · Updated last year
- [CVPR 2022] "Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free" by Tianlong Chen*, Zhenyu Zhang*, Yihua Zhang*, Shiyu C… ☆26 · Updated 2 years ago
- ☆19 · Updated last year
- Code for the paper "RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models" (EMNLP 2021) ☆24 · Updated 3 years ago
- ☆42 · Updated last year
- ☆13 · Updated 2 years ago
- ☆25 · Updated 3 years ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆25 · Updated 5 months ago
- Source code of the paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models" accepted by ICLR 2024 ☆33 · Updated 11 months ago
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai*, Zhenyu Zh… ☆20 · Updated 2 years ago
- [ICLR 2024] Provable Robust Watermarking for AI-Generated Text ☆32 · Updated last year
- ☆13 · Updated last year
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging ☆14 · Updated 6 months ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year
- Implementation for Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder (EMNLP-Findings 2020) ☆15 · Updated 4 years ago
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" ☆51 · Updated 9 months ago
- Repo for the arXiv preprint "Gradient-based Adversarial Attacks against Text Transformers" ☆108 · Updated 2 years ago
- Official repository for "PostMark: A Robust Blackbox Watermark for Large Language Models" ☆24 · Updated 8 months ago
- Official implementation of the CVPR 2023 paper "Backdoor Defense via Deconfounded Representation Learning" ☆26 · Updated 2 years ago
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" (Findings of NAACL 2022) ☆29 · Updated 2 years ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆18 · Updated 2 months ago