hongcheki / sweet-watermark
Official repository of the paper "Who Wrote this Code? Watermarking for Code Generation" (ACL 2024)
☆23, updated 3 months ago
Related projects:
- Source code of the paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models", accepted at ICLR 2024 (☆23, updated 3 months ago)
- Robust natural language watermarking using invariant features (☆25, updated 11 months ago)
- Code for watermarking language models (☆69, updated last week)
- Data and code for "WaterBench: Towards Holistic Evaluation of LLM Watermarks" [ACL 2024 Main] (☆17, updated 10 months ago)
- Repository for "Towards Codable Watermarking for Large Language Models" (☆26, updated last year)
- Code and data for the paper "A Semantic Invariant Robust Watermark for Large Language Models", accepted at ICLR 2024 (☆25, updated 3 months ago)
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" (☆64, updated 2 weeks ago)
- [ICLR 2024] Provable Robust Watermarking for AI-Generated Text (☆25, updated 9 months ago)
- Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger" (☆37, updated 2 years ago)
- [ECCV 2024] Official PyTorch implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" (☆61, updated 9 months ago)
- Accepted by ECCV 2024 (☆59, updated 2 months ago)
- Official repository for the paper "SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning" (☆13, updated 4 months ago)
- A resource repository for machine unlearning in large language models (☆131, updated this week)
- Code for the paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder", published in Nature Machine Intelligence (☆40, updated 10 months ago)
- Official implementation of the paper "Three Bricks to Consolidate Watermarks for LLMs" (☆41, updated 7 months ago)
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization (☆11, updated 2 months ago)
- Up-to-date list of LLM watermarking papers (☆253, updated 3 months ago)
- Submission guide and discussion board for the AI Singapore Global Challenge for Safe and Secure LLMs (Track 1A) (☆16, updated 2 months ago)
- [EMNLP 2023] Poisoning Retrieval Corpora by Injecting Adversarial Passages (https://arxiv.org/abs/2310.19156) (☆21, updated 9 months ago)
- Official code for the paper "Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications" (☆55, updated 2 months ago)
- Watermarking Text Generated by Black-Box Language Models (☆28, updated 9 months ago)
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" (☆32, updated 2 months ago)
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (☆50, updated 2 months ago)