abehou / SemStamp
Repo for SemStamp (NAACL2024) and k-SemStamp (ACL2024)
☆17 · Updated 3 months ago
Alternatives and similar repositories for SemStamp:
Users interested in SemStamp are comparing it to the repositories listed below.
- multi-bit language model watermarking (NAACL 24) ☆13 · Updated 6 months ago
- Repository for Towards Codable Watermarking for Large Language Models ☆36 · Updated last year
- Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger" ☆42 · Updated 2 years ago
- Robust natural language watermarking using invariant features ☆25 · Updated last year
- ☆18 · Updated last year
- [ACL2024-Main] Data and Code for WaterBench: Towards Holistic Evaluation of LLM Watermarks ☆23 · Updated last year
- Code and data for paper "Can LLM Watermarks Robustly Prevent Unauthorized Knowledge Distillation?" ☆10 · Updated last month
- Composite Backdoor Attacks Against Large Language Models ☆12 · Updated 11 months ago
- ☆25 · Updated 5 months ago
- ☆9 · Updated 3 years ago
- Accepted by ECCV 2024 ☆112 · Updated 5 months ago
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ☆53 · Updated this week
- A survey on harmful fine-tuning attacks against large language models ☆153 · Updated this week
- ☆14 · Updated this week
- Code and data for paper "A Semantic Invariant Robust Watermark for Large Language Models" accepted by ICLR 2024. ☆27 · Updated 4 months ago
- Official repository of the paper: Who Wrote this Code? Watermarking for Code Generation (ACL 2024) ☆33 · Updated 9 months ago
- Source code of paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models" accepted by ICLR 2024 ☆32 · Updated 10 months ago
- ☆42 · Updated 2 months ago
- Code for paper: PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models, IEEE ICASSP 2024. Demo//124.220.228.133:11107 ☆17 · Updated 7 months ago
- ☆17 · Updated 6 months ago
- ☆18 · Updated 9 months ago
- A curated list of trustworthy Generative AI papers. Daily updating... ☆71 · Updated 6 months ago
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization ☆20 · Updated 8 months ago
- Unofficial implementation of "Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection" ☆18 · Updated 8 months ago
- ☆36 · Updated 7 months ago
- This is the code repository of our submission: Understanding the Dark Side of LLMs’ Intrinsic Self-Correction. ☆55 · Updated 3 months ago
- ☆43 · Updated 7 months ago
- Code&Data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆63 · Updated 5 months ago
- ☆20 · Updated last year
- [ICLR24] Official Repo of BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models ☆31 · Updated 8 months ago