abehou / SemStamp
Repo for SemStamp (NAACL 2024) and k-SemStamp (ACL 2024)
☆20 · Updated 4 months ago
Alternatives and similar repositories for SemStamp:
Users interested in SemStamp are comparing it to the libraries listed below.
- multi-bit language model watermarking (NAACL 24) ☆13 · Updated 7 months ago
- Repository for Towards Codable Watermarking for Large Language Models ☆36 · Updated last year
- A survey on harmful fine-tuning attacks for large language models ☆157 · Updated last week
- Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger" ☆42 · Updated 2 years ago
- Robust natural language watermarking using invariant features ☆25 · Updated last year
- [ACL2024-Main] Data and Code for WaterBench: Towards Holistic Evaluation of LLM Watermarks ☆24 · Updated last year
- ☆25 · Updated 6 months ago
- Source code of paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models" accepted by ICLR 2024 ☆33 · Updated 10 months ago
- ☆47 · Updated 3 months ago
- Code and data for paper "A Semantic Invariant Robust Watermark for Large Language Models" accepted by ICLR 2024 ☆28 · Updated 5 months ago
- ☆18 · Updated last year
- ☆16 · Updated last week
- [ICLR24] Official Repo of BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models ☆33 · Updated 8 months ago
- Composite Backdoor Attacks Against Large Language Models ☆13 · Updated last year
- This is the code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" ☆36 · Updated 5 months ago
- [NAACL 25 Demo] TrustEval: A modular and extensible toolkit for comprehensive trust evaluation of generative foundation models (GenFMs) ☆97 · Updated last week
- Accepted by ECCV 2024 ☆122 · Updated 6 months ago
- This is the code repository of our submission: Understanding the Dark Side of LLMs' Intrinsic Self-Correction ☆56 · Updated 3 months ago
- Official repository of the paper: Who Wrote this Code? Watermarking for Code Generation (ACL 2024) ☆33 · Updated 10 months ago
- ☆20 · Updated last year
- Awesome Large Reasoning Model (LRM) Safety. This repository is used to collect security-related research on large reasoning models such as … ☆63 · Updated this week
- ☆19 · Updated 10 months ago
- UP-TO-DATE LLM Watermark paper. 🔥🔥🔥 ☆337 · Updated 4 months ago
- ☆24 · Updated 2 months ago
- ☆44 · Updated 8 months ago
- ☆18 · Updated 7 months ago
- A toolbox for backdoor attacks. ☆21 · Updated 2 years ago
- ☆14 · Updated 3 weeks ago
- [CIKM 2024] Trojan Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment ☆23 · Updated 8 months ago
- MASTERKEY is a framework designed to explore and exploit vulnerabilities in large language model chatbots by automating jailbreak attacks… ☆20 · Updated 7 months ago