abehou / SemStamp
Repo for SemStamp (NAACL 2024) and k-SemStamp (ACL 2024)
☆20 · Updated 5 months ago
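At a high level, SemStamp watermarks text at the sentence level: each candidate sentence is embedded, the embedding is hashed into a region of semantic space with locality-sensitive hashing, and only sentences whose region is on a secret "valid" list are emitted; detection counts how many sentences of a text fall in valid regions. The sketch below illustrates that idea only. It assumes a generic sentence encoder passed in as `embed`, random-hyperplane LSH, and illustrative constants; none of these names reflect the repository's actual code or API.

```python
# Minimal sketch of sentence-level semantic watermarking via LSH
# (the high-level idea behind SemStamp). All names are hypothetical,
# not the repo's actual API.
import numpy as np

rng = np.random.default_rng(seed=42)      # shared secret seed
DIM, N_BITS = 384, 3                      # embedding dim, number of hyperplanes
PLANES = rng.normal(size=(N_BITS, DIM))   # secret random hyperplanes
VALID = {0, 2, 5, 7}                      # secret "valid" half of the 2**N_BITS regions

def lsh_region(embedding: np.ndarray) -> int:
    """Map a sentence embedding to one of 2**N_BITS LSH regions."""
    bits = (PLANES @ embedding > 0).astype(int)
    return int("".join(map(str, bits)), 2)

def watermarked_sentence(candidates, embed, max_tries=16):
    """Rejection-sample candidate sentences until one lands in a valid region."""
    for sent in candidates[:max_tries]:
        if lsh_region(embed(sent)) in VALID:
            return sent
    return candidates[0]  # fall back if no candidate qualifies

def detect(sentences, embed) -> float:
    """Fraction of sentences in valid regions; near 0.5 for unwatermarked text."""
    hits = sum(lsh_region(embed(s)) in VALID for s in sentences)
    return hits / len(sentences)
```

With half of the regions marked valid, unwatermarked text lands in a valid region about half the time, while watermarked text does so almost always, so a simple proportion test on `detect()` separates the two. k-SemStamp replaces the LSH partition with k-means clusters of the embedding space but follows the same accept/reject scheme.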
Alternatives and similar repositories for SemStamp
Users interested in SemStamp are comparing it to the repositories listed below:
- Multi-bit language model watermarking (NAACL 2024) · ☆13 · Updated 8 months ago
- Repository for "Towards Codable Watermarking for Large Language Models" · ☆37 · Updated last year
- [ACL 2024 Main] Data and code for "WaterBench: Towards Holistic Evaluation of LLM Watermarks" · ☆26 · Updated last year
- Code and data for the paper "A Semantic Invariant Robust Watermark for Large Language Models", accepted at ICLR 2024 · ☆32 · Updated 6 months ago
- Robust natural language watermarking using invariant features · ☆25 · Updated last year
- Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger" · ☆43 · Updated 2 years ago
- Official repository of the paper "Who Wrote this Code? Watermarking for Code Generation" (ACL 2024) · ☆34 · Updated last year
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization · ☆24 · Updated 10 months ago
- Accepted by ECCV 2024 · ☆130 · Updated 7 months ago
- Code repository for "Understanding the Dark Side of LLMs' Intrinsic Self-Correction" · ☆56 · Updated 5 months ago
- Source code of the paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models", accepted at ICLR 2024 · ☆34 · Updated last year
- "In-Context Unlearning: Language Models as Few Shot Unlearners", Martin Pawelczyk, Seth Neel* and Himabindu Lakkaraju*; ICML 2024 · ☆26 · Updated last year
- Code and data for the paper "Can LLM Watermarks Robustly Prevent Unauthorized Knowledge Distillation?" (ACL 2025 Main) · ☆14 · Updated 3 months ago
- [USENIX Security '24] REMARK-LLM: A robust and efficient watermarking framework for generative large language models · ☆25 · Updated 7 months ago
- Code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" · ☆39 · Updated 6 months ago
- Official code for the ACL 2024 paper "GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis" · ☆56 · Updated 7 months ago
- [CIKM 2024] Trojan Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment · ☆23 · Updated 10 months ago
- A survey on harmful fine-tuning attacks for large language models · ☆178 · Updated last week
- Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models · ☆21 · Updated 2 months ago