abehou / SemStamp
Repo for SemStamp (NAACL 2024) and k-SemStamp (ACL 2024)
☆26 · Updated last year
Alternatives and similar repositories for SemStamp
Users interested in SemStamp are comparing it to the libraries listed below.
- Multi-bit language model watermarking (NAACL 24) ☆17 · Updated last year
- Up-to-date LLM watermark papers. 🔥🔥🔥 ☆367 · Updated last year
- Repository for "Towards Codable Watermarking for Large Language Models" ☆38 · Updated 2 years ago
- ☆32 · Updated last month
- A survey on harmful fine-tuning attacks for large language models ☆225 · Updated last month
- ☆21 · Updated last year
- ☆17 · Updated 7 months ago
- Robust natural language watermarking using invariant features ☆28 · Updated 2 years ago
- ☆21 · Updated last year
- Accepted by ECCV 2024 ☆178 · Updated last year
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ☆220 · Updated last month
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate…" ☆45 · Updated last year
- Code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" ☆47 · Updated 2 months ago
- [ACL 2024 Main] Data and code for "WaterBench: Towards Holistic Evaluation of LLM Watermarks" ☆29 · Updated 2 years ago
- Code repository for the submission "Understanding the Dark Side of LLMs' Intrinsic Self-Correction" ☆63 · Updated last year
- ☆37 · Updated last year
- Source code for "An Unforgeable Publicly Verifiable Watermark for Large Language Models" (ICLR 2024) ☆34 · Updated last year
- ☆40 · Updated last year
- ☆71 · Updated 6 months ago
- Code and data for "A Semantic Invariant Robust Watermark for Large Language Models" (ICLR 2024) ☆37 · Updated last year
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆214 · Updated 3 weeks ago
- Official code for the ACL 2024 paper "GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis" ☆60 · Updated last year
- A curated list of trustworthy generative AI papers, updated daily ☆75 · Updated last year
- Accepted by IJCAI-24 Survey Track ☆225 · Updated last year
- ☆113 · Updated 10 months ago
- Code and data for "Can Watermarked LLMs be Identified by Users via Crafted Prompts?" (ICLR 2025 Spotlight) ☆28 · Updated 11 months ago
- Official repository for the paper "Who Wrote this Code? Watermarking for Code Generation" (ACL 2024) ☆39 · Updated last year
- Awesome Large Reasoning Model (LRM) Safety. A collection of security-related research on large reasoning models such as … ☆78 · Updated this week
- A curated collection of Large Language Model (LLM) watermark resources ☆53 · Updated 10 months ago
- Source code and scripts for the paper "Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks" ☆20 · Updated last year